I have a query that performs quite nicely when run like this: SELECT YADAYADA FROM MYTABLE WHERE FVAL <= 100 AND TVAL >= 100 Since there’s an index on (FVAL, TVAL), the query is optimal: a single nonclustered index seek covers the whole query. Now it would be nice to use a constant returned from a […]
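A minimal sketch of the pattern described above, using SQLite for illustration (the question is about SQL Server); the column and table names follow the question, but the rows, payload column, and index name are invented:

```python
import sqlite3

# Recreate the shape of the question: a table with an index on (FVAL, TVAL).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MYTABLE (FVAL INTEGER, TVAL INTEGER, PAYLOAD TEXT)")
conn.execute("CREATE INDEX IX_FVAL_TVAL ON MYTABLE (FVAL, TVAL)")
conn.executemany(
    "INSERT INTO MYTABLE VALUES (?, ?, ?)",
    [(50, 150, "in range"), (90, 99, "TVAL too low"), (200, 300, "FVAL too high")],
)

# Only rows whose interval [FVAL, TVAL] contains 100 qualify.
rows = conn.execute(
    "SELECT FVAL, TVAL FROM MYTABLE WHERE FVAL <= 100 AND TVAL >= 100"
).fetchall()
print(rows)

# When the query touches only indexed columns, the optimizer can answer
# it from the index alone (SQLite reports this in EXPLAIN QUERY PLAN).
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT FVAL, TVAL FROM MYTABLE "
    "WHERE FVAL <= 100 AND TVAL >= 100"
).fetchall()
print(plan)
```

SQL Server's nonclustered index seek and SQLite's index search differ in detail, but the principle — the leading index column takes the range seek, the second column is filtered within it — is the same.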
I have a SQL Server database designed like this: TableParameter Id (int, PRIMARY KEY, IDENTITY) Name1 (string) Name2 (string, can be null) Name3 (string, can be null) Name4 (string, can be null) TableValue Iteration (int) IdTableParameter (int, FOREIGN KEY) Type (string) Value (decimal) So, as you can see, TableValue is linked to TableParameter. TableParameter […]
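A sketch of the two tables and the join between them, in SQLite syntax for illustration; the schema follows the question, while the sample rows ('pressure', 'measured', 42.5) are invented:

```python
import sqlite3

# The two tables from the question: TableValue references TableParameter.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TableParameter (
    Id INTEGER PRIMARY KEY AUTOINCREMENT,
    Name1 TEXT NOT NULL,
    Name2 TEXT,
    Name3 TEXT,
    Name4 TEXT
);
CREATE TABLE TableValue (
    Iteration INTEGER,
    IdTableParameter INTEGER REFERENCES TableParameter(Id),
    Type TEXT,
    Value REAL
);
""")
conn.execute("INSERT INTO TableParameter (Name1) VALUES ('pressure')")
conn.execute("INSERT INTO TableValue VALUES (1, 1, 'measured', 42.5)")

# Values joined back to the parameter they belong to.
joined = conn.execute("""
    SELECT p.Name1, v.Iteration, v.Type, v.Value
    FROM TableValue v
    JOIN TableParameter p ON p.Id = v.IdTableParameter
""").fetchall()
print(joined)
```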
I have rebuilt indexes and updated statistics. The query is straightforward, with a subquery in the WHERE clause. SELECT TOP 1 * from MeetingPost_reg WHERE userid = 1234 AND meetingpost_regid <> 9999 AND DateStart < (SELECT DateStart FROM MeetingPost_reg WHERE meetingpost_regid = 9999) ORDER BY DateStart desc There is an index on datastart, userid. meetingpost_regid […]
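The same query can be sketched in SQLite (which uses LIMIT 1 where SQL Server uses TOP 1); the table and column names follow the question, and the dates and ids are invented to show what the query returns — the user's latest registration strictly before the reference row 9999:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE MeetingPost_reg (
    meetingpost_regid INTEGER PRIMARY KEY,
    userid INTEGER,
    DateStart TEXT
)""")
conn.executemany("INSERT INTO MeetingPost_reg VALUES (?, ?, ?)", [
    (9999, 1234, "2024-03-01"),  # the reference registration
    (1,    1234, "2024-01-15"),
    (2,    1234, "2024-02-20"),  # latest one before the reference
    (3,    1234, "2024-04-01"),  # after the reference: excluded
])

row = conn.execute("""
    SELECT * FROM MeetingPost_reg
    WHERE userid = 1234
      AND meetingpost_regid <> 9999
      AND DateStart < (SELECT DateStart FROM MeetingPost_reg
                       WHERE meetingpost_regid = 9999)
    ORDER BY DateStart DESC
    LIMIT 1
""").fetchone()
print(row)  # the registration immediately preceding 9999
```

Since the subquery is uncorrelated (it looks up one fixed row), the optimizer can evaluate it once; the ORDER BY … LIMIT 1 is where a (userid, DateStart) index earns its keep.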
I would like to know how to measure whether a stored procedure is performing optimally. The stored procedure may insert, delete, update, or otherwise manipulate data. Please describe an approach for assessing its performance in SQL Server. Thanks in advance!
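One crude but portable starting point is client-side timing, sketched here with SQLite; on SQL Server itself you would normally look at SET STATISTICS TIME / SET STATISTICS IO output or the actual execution plan instead. The table and data below are invented:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

# Time one execution of the statement under test.
start = time.perf_counter()
total = conn.execute("SELECT SUM(n) FROM t").fetchone()[0]
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"SUM over 10,000 rows: {total} ({elapsed_ms:.2f} ms)")
```

Wall-clock timing of a single run is noisy; repeated runs against realistic data volumes, plus the server-side statistics above, give a far more trustworthy picture.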
Recently, I was asked to write a query that selects properties of the entities in the group containing the maximum number of such entities. I did it on the Northwind (Microsoft's distributed sample) database in a couple of ways. ONE: SELECT cat.CategoryName, prod.ProductName FROM Categories cat JOIN Products prod ON cat.CategoryID = prod.CategoryID JOIN (SELECT […]
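The "group with the maximum number of entities" pattern can be sketched on a tiny stand-in for Northwind's Categories/Products tables (the rows below are invented, and the derived-table variant shown is one plausible completion of the truncated query, not necessarily the asker's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Categories (CategoryID INTEGER PRIMARY KEY, CategoryName TEXT);
CREATE TABLE Products  (ProductID INTEGER PRIMARY KEY, ProductName TEXT,
                        CategoryID INTEGER REFERENCES Categories(CategoryID));
INSERT INTO Categories VALUES (1, 'Beverages'), (2, 'Condiments');
INSERT INTO Products VALUES
    (1, 'Chai', 1), (2, 'Chang', 1), (3, 'Aniseed Syrup', 2);
""")

# Products of the category holding the most products, via a derived table
# that picks the top category by product count.
rows = conn.execute("""
    SELECT cat.CategoryName, prod.ProductName
    FROM Categories cat
    JOIN Products prod ON cat.CategoryID = prod.CategoryID
    JOIN (SELECT CategoryID
          FROM Products
          GROUP BY CategoryID
          ORDER BY COUNT(*) DESC
          LIMIT 1) top_cat ON top_cat.CategoryID = cat.CategoryID
""").fetchall()
print(sorted(rows))
```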
I use temp tables frequently to simplify data loads (easier debugging, cleaner SELECT statements, etc.). If performance demands it, I’ll create a physical table instead. I noticed recently that I automatically declare my temp tables as global (##temp_load) rather than local (#temp_table). I don’t know why, but that’s been my habit for years. I […]
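The key behavioral difference is scope: SQL Server's local #temp tables are visible only to the session that created them, while global ##temp tables are visible to every session. SQLite's TEMP tables behave like the local kind, which this sketch uses to illustrate the scoping (database path and table name are invented):

```python
import os
import sqlite3
import tempfile

# Two connections (sessions) to the same on-disk database.
path = os.path.join(tempfile.mkdtemp(), "shared.db")
conn_a = sqlite3.connect(path)
conn_b = sqlite3.connect(path)

# A TEMP table created in session A...
conn_a.execute("CREATE TEMP TABLE temp_load (id INTEGER)")
conn_a.execute("INSERT INTO temp_load VALUES (1)")
visible_to_a = conn_a.execute("SELECT COUNT(*) FROM temp_load").fetchone()[0]

# ...does not exist from session B's point of view.
try:
    conn_b.execute("SELECT COUNT(*) FROM temp_load")
    seen_by_b = True
except sqlite3.OperationalError:  # "no such table"
    seen_by_b = False

print(visible_to_a, seen_by_b)
```

With global ##temp tables that second SELECT would succeed — which also means two concurrent runs of the same load can collide on the same ##table name, one argument for preferring local temp tables by default.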
I would like to know the impact on performance if I run this query under the following conditions. Query: select `players`.*, count(`clicks`.`id`) as `clicks_count` from `players` left join `clicks` on `clicks`.`player_id` = `players`.`id` group by `players`.`id` order by `clicks_count` desc limit 1 Conditions: In the clicks table I expect inserts to happen 1000 times in […]
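What the query computes can be shown on invented data, here translated from MySQL to SQLite (with an explicit column list in place of `players`.*): the player with the most clicks, where the LEFT JOIN keeps players that have no clicks at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE clicks  (id INTEGER PRIMARY KEY, player_id INTEGER);
INSERT INTO players VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
INSERT INTO clicks (player_id) VALUES (1), (2), (2);  -- alice: 1, bob: 2
""")

top = conn.execute("""
    SELECT players.id, players.name, COUNT(clicks.id) AS clicks_count
    FROM players
    LEFT JOIN clicks ON clicks.player_id = players.id
    GROUP BY players.id
    ORDER BY clicks_count DESC
    LIMIT 1
""").fetchone()
print(top)  # bob, with 2 clicks
```

Performance-wise, the GROUP BY must aggregate over every click row before LIMIT 1 can pick a winner, so an index on clicks(player_id) matters far more than the insert rate itself.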
I have the following query (slightly amended for clarity): CREATE PROCEDURE Kctc.CaseTasks_GetCaseTasks @CaseNumber int … other parameters ,@ChangedBefore datetime ,@ChangedAfter datetime AS SELECT Kctc.CaseTasks.CaseTaskId …blah blah blah FROM Kctc.CaseTasks … some joins here WHERE … some normal where clauses AND ( (@ChangedAfter IS NULL AND @ChangedBefore IS NULL) OR EXISTS (SELECT * FROM Kctc.FieldChanges WHERE […]
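The optional-parameter pattern in that WHERE clause — pass everything when both date bounds are NULL, otherwise require a matching change — can be sketched as follows; the table and column names below (ChangedOn, the simplified FieldChanges shape) are invented stand-ins, not the asker's real schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CaseTasks    (CaseTaskId INTEGER PRIMARY KEY);
CREATE TABLE FieldChanges (CaseTaskId INTEGER, ChangedOn TEXT);
INSERT INTO CaseTasks VALUES (1), (2);
INSERT INTO FieldChanges VALUES (1, '2024-06-01');
""")

def get_case_tasks(changed_after, changed_before):
    # Both bounds NULL -> no date filtering at all; otherwise a task
    # qualifies only if some field change falls inside the window.
    return conn.execute("""
        SELECT CaseTaskId FROM CaseTasks t
        WHERE (:after IS NULL AND :before IS NULL)
           OR EXISTS (SELECT * FROM FieldChanges f
                      WHERE f.CaseTaskId = t.CaseTaskId
                        AND f.ChangedOn > :after
                        AND f.ChangedOn < :before)
    """, {"after": changed_after, "before": changed_before}).fetchall()

print(get_case_tasks(None, None))                  # no filter: both tasks
print(get_case_tasks("2024-01-01", "2024-12-31"))  # only task 1 changed
```

In SQL Server, such "catch-all" OR branches are a classic source of one cached plan serving both shapes badly; OPTION (RECOMPILE) or splitting the branches into separate statements are the usual remedies.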
I have a table with 200 columns (maybe more…) a1 a2 a3 a4 a5 …a200 ——————————— 1.2 2.3 4.4 5.1 6.7… 11.9 7.2 2.3 4.3 5.1 4.7… 3.9 1.9 5.3 3.3 5.1 3.7… 8.9 5.2 2.7 7.4 9.1 1.7… 2.9 I would like to compute several aggregates: SUM(every column) AVG(every column) SQRT(SUM(every column)) POWER(SUM(every column),2) […]
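One way to avoid writing SUM/AVG/SQRT/POWER by hand for 200 columns is to generate the select list programmatically; a sketch with 5 stand-in columns and two of the sample rows (SQLite lacks SQRT and POWER by default, so those two are computed in Python here, whereas SQL Server lets you write SQRT(SUM(a1)) and POWER(SUM(a1), 2) directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cols = [f"a{i}" for i in range(1, 6)]
conn.execute(f"CREATE TABLE wide ({', '.join(c + ' REAL' for c in cols)})")
conn.execute("INSERT INTO wide VALUES (1.2, 2.3, 4.4, 5.1, 6.7)")
conn.execute("INSERT INTO wide VALUES (7.2, 2.3, 4.3, 5.1, 4.7)")

# Build "SUM(a1), AVG(a1), SUM(a2), AVG(a2), ..." from the column list.
select_list = ", ".join(f"SUM({c}), AVG({c})" for c in cols)
flat = conn.execute(f"SELECT {select_list} FROM wide").fetchone()

stats = {}
for i, c in enumerate(cols):
    s, avg = flat[2 * i], flat[2 * i + 1]
    stats[c] = {"sum": s, "avg": avg, "sqrt_sum": s ** 0.5, "sum_sq": s ** 2}
print(stats["a1"])
```

The same string-building trick works in T-SQL with dynamic SQL against sys.columns, which keeps the query in step with the table when columns are added.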
Possible Duplicate: Which is faster/best? SELECT * or SELECT column1, column2, column3, etc. Given SELECT * FROM tbl1 and SELECT field1, field2, … FROM tbl1, how does the SQL engine execute these two SELECT statements (step by step), and which one performs better?
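One concrete, observable difference (shown here in SQLite; the table, columns, and index name are invented): with an explicit column list the optimizer can answer entirely from a covering index, while SELECT * must also visit the table rows for the remaining columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl1 (field1 INTEGER, field2 INTEGER, field3 TEXT)")
conn.execute("CREATE INDEX ix ON tbl1 (field1, field2)")
conn.execute("INSERT INTO tbl1 VALUES (1, 10, 'x')")

def plan(sql):
    # Flatten EXPLAIN QUERY PLAN output into one string for inspection.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

plan_cols = plan("SELECT field1, field2 FROM tbl1 WHERE field1 = 1")
plan_star = plan("SELECT * FROM tbl1 WHERE field1 = 1")
print(plan_cols)  # ... USING COVERING INDEX ix ...
print(plan_star)  # ... USING INDEX ix ... (table rows still fetched)
```

Beyond plan shape, SELECT * also transfers columns the client may not need and silently changes the result set when the table changes — the usual reasons to name columns explicitly in production code.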