If I want to try the multiple tempdb file trick (have number of files equal
to number of processors), is it better if they are each a separate filegroup,
or is it better (or required!) if they are all members of a single filegroup?
Also, while the KB article recommends using fixed-size and equal-sized
files, it seems to me that it should not really matter: if the algorithm is
to use the largest file first, that file will quickly stop being the largest,
and we will get pretty much the same round-robin that the more constrained
configurations would see.
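For what it's worth, the documented behavior is proportional fill: allocations are spread across the files in a filegroup in proportion to each file's free space, not strictly "largest first." A rough sketch of that intuition (a simplified greedy model, not the actual engine internals):

```python
# Simplified model of proportional-fill style allocation: each allocation
# goes to the file with the most free space. This is only an approximation
# of SQL Server's behavior, meant to illustrate why equal-sized files
# round-robin evenly while unequal ones do not (at first).
def allocate(free_space, n_allocs):
    """Distribute n_allocs units, one at a time, each to the file
    currently holding the most free space. Returns per-file counts."""
    counts = [0] * len(free_space)
    free = list(free_space)
    for _ in range(n_allocs):
        i = max(range(len(free)), key=lambda j: free[j])
        counts[i] += 1
        free[i] -= 1
    return counts

# Equal files: allocations round-robin evenly from the start.
print(allocate([100, 100, 100, 100], 40))   # → [10, 10, 10, 10]

# Unequal files: the largest file absorbs everything until its free
# space levels out with the others.
print(allocate([400, 100, 100, 100], 40))   # → [40, 0, 0, 0]
```

So the questioner's intuition is partly right for a near-equal starting point, but with markedly unequal files the skew persists for a long time, which is why the KB advice is equal, fixed sizes.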
Anyone care to comment on what performance improvements, if any, are likely?
This is mostly sequential batch processing, but with a fair few million
records and heavy use of GROUP BY, mostly on current 2 * dual-core
systems. It doesn't actually have a lot of tempdb contention that I've
noticed, though we'd like to see better performance in any case.
Thanks.
Josh

If you do use multiple filegroups, you would be using a very unusual and
uncommon tempdb configuration. My suggestion is: forget about multiple
filegroups. What matters is the number of data files.
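To make that concrete, adding data files to tempdb looks roughly like the following. They all go into the single default PRIMARY filegroup (tempdb does not take user filegroups), and the file names, path, and sizes here are placeholders:

```sql
-- Sketch only: logical names, paths, and sizes below are placeholders.
-- All tempdb data files live in the one PRIMARY filegroup.
-- FILEGROWTH = 0 keeps each file at a fixed, equal size, per the KB advice.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);
-- Repeat up to one file per processor, all the same size.
```

A restart of the SQL Server service is not required to add files, though resizing or moving the existing primary file does take effect only after a restart.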
I have not done any performance tests myself on the impact of the number of
tempdb data files, but I have tested the performance impact of the number
of data files in a user database, and there is a significant performance
impact when there is significant allocation activity.
http://sqlblog.com/blogs/linchi_she...r-database.aspx
Since tempdb is by its very nature allocation intensive, I would not
question the impact of the recommended configuration on tempdb performance
(and therefore on SQL Server performance).
Linchi
"JRStern" wrote:
> If I want to try the multiple tempdb file trick (have number of files equal
> to number of processors), is it better if they are each a separate filegroup,
> or is it better (or required!) if they are all members of a single filegroup?
> Also, while the kb article recommends using fixed-size and equal-sized
> files, it seems to me that is should not really matter, if the algorithm is
> to try to use the largest file first, it will quickly become not the largest
> and we will get pretty much the same round-robin as the much more limited
> configurations would see.
>
> Anyone like to comment on the performance improvements, if any, likely?
> This is mostly sequential batch processing, but with fair millions of
> records, and a lot more use of group-by, mostly on current 2 * dualcore
> systems, and doesn't actually have a lot of tempdb contention that I've
> noticed, though we'd like to see better
> performance in any case.
> Thanks.
> Josh