
Monday, March 19, 2012

Multiple Trans Log Restores

Is there a way to restore multiple transaction logs when
you don't know the names of the log files (i.e. the last 8
characters of the log file name change)?
Thanks,
Josh

You could either use xp_cmdshell to run DIR into a temp table and work your code from there, or base
your restore on the backup history tables in msdb (assuming that the backup was taken on the same
machine). Both options require a bit of coding.
--
Tibor Karaszi, SQL Server MVP
Archive at: http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Joshua Lakin" <LakinJ@.NOSPAMPLEASE.bsci.com> wrote in message
news:00a501c3c58f$8474d3f0$a401280a@.phx.gbl...
> Is there a way to restore multiple transactions logs when
> you don't know the name of the logs files (i.e. the last 8
> characters of the log file changes)?
> Thanks,
> Josh
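As a sketch of the msdb approach Tibor mentions, the backup history tables can be used to generate the RESTORE statements. This assumes the backups were taken on this server; the database name YourDb and the date filter are placeholders:

```sql
-- Build RESTORE LOG statements from msdb backup history (a sketch).
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql
    + N'RESTORE LOG YourDb FROM DISK = N''' + mf.physical_device_name
    + N''' WITH NORECOVERY;' + CHAR(13) + CHAR(10)
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS mf
    ON mf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'YourDb'
  AND bs.type = 'L'                        -- log backups only
  AND bs.backup_start_date > '20120319'    -- after the full backup you restored
ORDER BY bs.backup_start_date;

PRINT @sql;   -- inspect the generated script, then EXEC (@sql);
-- finish with: RESTORE DATABASE YourDb WITH RECOVERY;
```

Printing the script before executing it lets you check the file names and ordering, which matters when the log file names change with every backup.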

Monday, March 12, 2012

multiple tempdb filegroup

If I want to try the multiple tempdb file trick (have number of files equal
to number of processors), is it better if they are each a separate filegroup,
or is it better (or required!) if they are all members of a single filegroup?
Also, while the kb article recommends using fixed-size and equal-sized
files, it seems to me that it should not really matter; if the algorithm is
to try to use the largest file first, it will quickly become not the largest
and we will get pretty much the same round-robin as the much more limited
configurations would see.
Anyone like to comment on the performance improvements, if any, likely?
This is mostly sequential batch processing, but with a fair few million
records, and a lot more use of group-by, mostly on current 2 * dual-core
systems; it doesn't actually have a lot of tempdb contention that I've
noticed, though we'd like to see better
performance in any case.
Thanks.
Josh
If you do use multiple filegroups, you would be using a very unusual and
uncommon tempdb configuration. My suggestion is: forget about multiple
filegroups. What matters is the number of data files.
I have not done any performance tests myself on the impact of the number of
the tempdb data files, but I've done tests on the performance impact of the
number of data files in a user database, and there is significant performance
impact if there is significant allocation activity.
http://sqlblog.com/blogs/linchi_shea/archive/2007/01/29/how-many-data-files-should-i-create-for-a-user-database.aspx
Since tempdb by its very nature is allocation intensive, I would not
question the impact of the recommended configuration on tempdb performance
(and therefore on overall SQL Server performance).
Linchi
"JRStern" wrote:

> If I want to try the multiple tempdb file trick (have number of files equal
> to number of processors), is it better if they are each a separate filegroup,
> or is it better (or required!) if they are all members of a single filegroup?
> Also, while the kb article recommends using fixed-size and equal-sized
> files, it seems to me that is should not really matter, if the algorithm is
> to try to use the largest file first, it will quickly become not the largest
> and we will get pretty much the same round-robin as the much more limited
> configurations would see.
>
> Anyone like to comment on the performance improvements, if any, likely?
> This is mostly sequential batch processing, but with fair millions of
> records, and a lot more use of group-by, mostly on current 2 * dualcore
> systems, and doesn't actually have a lot of tempdb contention that I've
> noticed, though we'd like to see better
> performance in any case.
> Thanks.
> Josh
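For reference, the multiple-file setup itself is just ALTER DATABASE statements against tempdb's single (PRIMARY) filegroup. A sketch follows; the paths, sizes, and file count are assumptions to adjust for your disks and processor count:

```sql
-- Add equally sized tempdb data files to the default filegroup;
-- tempdb needs no extra filegroups, only extra data files.
-- Path 'T:\tempdb\' and SIZE below are placeholders.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb\tempdev3.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);
-- Repeat until the number of data files matches the number of
-- processors you want allocation spread across, keeping all files
-- the same fixed size so the round-robin stays even.
```

Equal, fixed sizes matter because SQL Server's proportional-fill algorithm favors files with more free space; unequal files skew allocations toward the largest one.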


Multiple tasks in the same assembly

Hello,
All the SSIS tasks (and components come to mention it) exist in their own assembly file.

Is there anything that stops us from putting multiple tasks into one assembly?
Assuming you can do that, is there a reason why this perhaps should not be done?

Thanks
Jamie

Actually, the percent sampling and row sampling transforms are in the same DLL.

There is nothing to stop you from having multiple tasks or multiple components in the same DLL. I can't even think of a reason why you couldn't have both a task and a component in the same DLL. In general, the reason why you wouldn't do that is ease of fix and/or upgrade. If you put 10 transforms in one DLL then if you only changed one you still have to "update" all the others because the DLL has changed. If they are all in separate DLLs then you only update the one that changed (i.e. instead of having to update a 5MB DLL you only have to update a 20KB DLL).

HTH,
Matt|||

Matt David wrote:

Actually, the percent sampling and row sampling transforms are in the same DLL.

There is nothing to stop you from having multiple tasks or multiple components in the same DLL. I can't even think of a reason why you couldn't have both a task and a component in the same DLL. In general, the reason why you wouldn't do that is ease of fix and/or upgrade. If you put 10 transforms in one DLL then if you only changed one you still have to "update" all the others because the DLL has changed. If they are all in separate DLLs then you only update the one that changed (i.e. instead of having to update a 5MB DLL you only have to update a 20KB DLL).

HTH,
Matt

Thanks Matt,

I thought it was a requirement that the DLLs for Tasks went in the ..\Tasks folder and the DLLs for components went in the ..\PipelineComponents folder. How could you do that if they were in the same DLL?

-Jamie|||

Jamie Thomson wrote:

Thanks Matt,

I thought it was a requirement that the DLLs for Tasks went in the ..\Tasks folder and the DLLs for components went in the ..\PipelineComponents folder. How could you do that if they were in the same DLL?

-Jamie


Yes, the same DLL would need to go to both of these places (plus Global Assembly Cache). Still, you build just one DLL.

Michael.|||I was working through this and have decided to go with the multiple-assembly approach, because of the ease of deployment and installation.|||

You may want to consider going one step further and separating Task/Component and User Interface into 2 distinct assemblies as well. This allows you to modify one without modifying the other, although the deployment benefit is a bit less obvious in this case.

In the main Task/Component class, the UITypeName property in the Attribute links your main assembly to the UI assembly.

-Doug

|||

One benefit of separating the UI from the task is that you don't have UI code on the server for security reasons. (Easier to lock down).
The reason we did this in the beginning was to make the task dlls smaller so that if we ever shipped something like MSDE (which has no UI), we could ship the tasks without the UI to make the distribution smaller (for web release etc.)
This might be useful for those writing custom tasks as well, though I can't think of when you might wish to ship a task without the UI.
K

Friday, March 9, 2012

Multiple Suspect Databases Please help

I administer a server with 300 small databases. It appears to be some type of file corruption and now all the databases are suspect.
I am getting the following errors:
Cannot associate files with different databases.
Also: Log file does not match the primary file. It may be from a different database or the log may have been rebuilt previously.
Device activation error...
Now I read several online articles and determined that the only way I can recover these databases is by using "sp_attach_single_file_db".
Now, with 300 databases, that would take several hours. That is the last thing I want to do.
Can someone please help. I dunno what to do. The backups I have are outdated.

What events have occurred to get you to this stage? From the errors you describe, it sounds as though the pathnames for one database have become scrambled with another. I can think of no way this could occur other than manually updating system tables incorrectly.
Anyway, if you can describe how you got to this point, in as much detail as possible, then someone may well be able to help. Being able to rescue the data via sp_attach is somewhat dodgy as the databases have not been detached via sp_detach_db. To try it I would copy ALL files (data and logs) somewhere safe, then generate a script to try the attaches. (Backup master first in case you mess it up.)
Alternatively, open a call with PSS to get someone helping.
Mike John
"Richard M." <anonymous@.discussions.microsoft.com> wrote in message news:4E2AD297-67F2-40FC-AE04-2B601E5E5BFD@.microsoft.com...
> I administer a server with 300 small databases. It appears to be some type of file corruption and now all the databases are suspect.
> I am getting the following errors:
> Cannot associate files with different databases.
> Also: Log file does not match the primary file. It may be from a different database or the log may have been rebuilt previously.
> Device activation error...
> Now I read several online articles and determined the only way I can recover these databases is using "sp_attach_single_file_db"
> Now, with 300 databases, that would take several hours. That is the last thing I want to do.
> Can someone please help. I dunno what to do. The backups I have are outdated.|||Hi,
Thank you for your reply. You are right! The "sp_attach_single_file_db" did work, but with 300 DBs it would take forever. However, as luck would have it, the hard drive on the SQL server containing the OS completely crashed! It went dead. So the cause seems to be a combination of file corruption due to hardware failure. What I did was create a new server and migrate the data and log files just before the server completely died. On the new server, I installed the exact same version of SQL and attempted to recover by replacing the new system databases with the originals so the server could see all the databases, but all the databases were suspect. Hence, this means that all the files I transferred were corrupted somehow. Now, I have two options:
A: Manually use "sp_attach_single_file_db" on all databases to recover them, since it seems to work, or
B: Bring the old server back online and attempt to rectify a handful of suspect databases.
I chose B, however the OS drive was completely dead. Luckily the drives containing the data files and the other drive containing the log files were OK. The server was then rebuilt with a new OS drive. Win2K Server was installed and, sure enough, it saw the other two drives with the data and log files. I then installed SQL Server and the updates. As luck would have it, all the databases were there with the exception of a dozen still suspect. So, I then applied "sp_attach_single_file_db" to repair those dozen databases and performed an immediate backup of all the databases. Right now everything seems to be working, but I just don't trust those drives. What do you think?
Regards,
Richard|||I would scrap the old drives and rebuild something like:
RAID 1 for the system drives.
RAID 0+1 for data.
RAID 1 for logs (a different set of drives from the data).
Regular log and data backups to different discs.
NT backup of the 'backup discs' to tape.
Sleep at night, as you would need several discs to go pop at the same time to get a real problem!
Good luck.
Mike John
"Richard M." <richard@.richardtheman.com> wrote in message news:297A2516-6BC0-4B23-9F13-ADA4515EF88D@.microsoft.com...
> Hi,
> Thank you for your reply. You are right! The "sp_attach_single_file_db" did work, but with 300 DBs it would take forever. However, as luck would have it, the hard drive on the SQL server containing the OS completely crashed! It went dead. So the cause seems to be a combination of file corruption due to hardware failure. What I did was create a new server and migrate the data and log files just before the server completely died. On the new server, I installed the exact same version of SQL and attempted to recover by replacing the new system databases with the originals so the server could see all the databases, but all the databases were suspect. Hence, this means that all the files I transferred were corrupted somehow. Now, I have two options:
> A: Manually use "sp_attach_single_file_db" on all databases to recover them, since it seems to work, or
> B: Bring the old server back online and attempt to rectify a handful of suspect databases.
> I chose B, however the OS drive was completely dead. Luckily the drives containing the data files and the other drive containing the log files were OK. The server was then rebuilt with a new OS drive. Win2K Server was installed and, sure enough, it saw the other two drives with the data and log files. I then installed SQL Server and the updates. As luck would have it, all the databases were there with the exception of a dozen still suspect. So, I then applied "sp_attach_single_file_db" to repair those dozen databases and performed an immediate backup of all the databases. Right now everything seems to be working, but I just don't trust those drives. What do you think?
> Regards,
> Richard

Wednesday, March 7, 2012

multiple sheets in excel

I know how to use SSIS to import data from a sheet in an Excel file.
How do you use SSIS to get the data out from different sheets into the database?
Is there such functionality please?
Thanks

arkiboys wrote:

I know how to use ssis to import data from the sheet in an excel file.
How do you use ssis to get the data out from different sheets into the database?
Is there such functionality please?
Thanks

Hi,

I haven't tried this yet, but, there can be at least 2 ways of doing this:

1) Create separate Data Flow Tasks for each sheet.

2) Use the Foreach Loop Container to loop through all the available sheets in the Excel, provided, the structure remains same.

Regards,

B@.ns

|||

I know how to loop through files but not sheets in one file

Do you know how this is done please?

|||I used a Script Task and the following code:


Code Snippet

Private Sub ShowExcelWorksheets(ByVal File As String)
    Dim excelTables As System.Data.DataTable
    Dim excelTable As System.Data.DataRow
    'Dim currentTable As String
    Dim excelConnection As System.Data.OleDb.OleDbConnection = _
        New System.Data.OleDb.OleDbConnection(ScriptMain.CreateExcelConnectionString(File))
    excelConnection.Open()
    excelTables = excelConnection.GetSchema("Tables")   ' one row per worksheet
    For Each excelTable In excelTables.Rows
        'Me.ComboBoxWorkSheet.Items.Add(excelTable.Item("TABLE_NAME").ToString)
    Next
    excelConnection.Close()
End Sub


|||

There is a good how-to article on using the Foreach loop with ADO.NET Schema Rowset enumerator to do this.

http://technet.microsoft.com/en-us/library/ms345182.aspx
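Outside SSIS, a single named sheet can also be read straight from T-SQL with OPENROWSET. A sketch; the file path C:\Data\Book1.xls and the sheet name Sheet2 are placeholders, and 'Ad Hoc Distributed Queries' must be enabled on the server:

```sql
-- Read one worksheet from an Excel file via the Jet OLE DB provider.
-- Worksheets are addressed as [SheetName$]; HDR=YES treats the first
-- row as column names.
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Data\Book1.xls;HDR=YES',
                'SELECT * FROM [Sheet2$]');
```

This suits ad hoc queries against a known sheet, whereas the Foreach Loop approach above is the better fit when the sheet names vary from file to file.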
