
Monday, October 13, 2008

Database design

1.What is normalization? Explain different levels of normalization?

Check out the article Q100139 from the Microsoft knowledge base and, of course, there's much more information available on the net. It's a good idea to get hold of any RDBMS fundamentals textbook, especially the one by C. J. Date. Most of the time, it will be okay if you can explain up to third normal form.

2.What is denormalization and when would you go for it?

As the name indicates, denormalization is the reverse process of normalization. It's the controlled introduction of redundancy into the database design. It helps improve query performance as the number of joins can be reduced.

3. How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?

One-to-One relationship can be implemented as a single table and rarely as two tables with primary and foreign key relationships.

One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships.

Many-to-Many relationships are implemented using a junction table with the keys from both the tables forming the composite primary key of the junction table.
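For example, here is a minimal sketch of a many-to-many relationship between students and courses using a junction table (the table and column names are hypothetical):

CREATE TABLE Students (StudentID int PRIMARY KEY, StudentName varchar(50))
CREATE TABLE Courses (CourseID int PRIMARY KEY, CourseName varchar(50))

-- Junction table: the composite primary key is formed from both foreign keys
CREATE TABLE StudentCourses (
    StudentID int REFERENCES Students(StudentID),
    CourseID int REFERENCES Courses(CourseID),
    PRIMARY KEY (StudentID, CourseID)
)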

4.What's the difference between a primary key and a unique key?

Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
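A quick sketch (hypothetical table) that illustrates the difference:

CREATE TABLE Products (
    ProductID int PRIMARY KEY,          -- clustered index by default, NULLs not allowed
    ProductCode varchar(10) UNIQUE      -- nonclustered index by default, one NULL allowed
)

INSERT Products VALUES (1, NULL)        -- succeeds: the unique column accepts one NULL
INSERT Products VALUES (2, NULL)        -- fails: a second NULL violates the unique constraint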

5.What are user defined data types and when you should go for them?

User defined data types let you extend the base SQL Server datatypes by providing a descriptive name and format to the database. For example, suppose your database has a column called Flight_Num which appears in many tables, and in all these tables it should be varchar(8). In this case you could create a user defined datatype called Flight_num_type of varchar(8) and use it across all your tables.

See sp_addtype, sp_droptype in books online.
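A minimal sketch of creating and using such a type with sp_addtype (the type name follows the example above; the Flights table is hypothetical):

-- Create the user defined datatype
EXEC sp_addtype Flight_num_type, 'varchar(8)', 'NOT NULL'

-- Use it in a table definition
CREATE TABLE Flights (FlightID int PRIMARY KEY, Flight_Num Flight_num_type)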

6.What is bit data type and what's the information that can be stored inside a bit column?

Bit datatype is used to store boolean information like 1 or 0 (true or false). Until SQL Server 6.5 the bit datatype could hold either a 1 or a 0 and there was no support for NULL. But from SQL Server 7.0 onwards, the bit datatype can represent a third state, which is NULL.

7. Define candidate key, alternate key, composite key.

A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys.

A key formed by combining two or more columns is called a composite key.

8.What are defaults? Is there a column to which a default can't be bound?

A default is a value that will be used by a column if no value is supplied for that column while inserting data. IDENTITY columns and timestamp columns can't have defaults bound to them. See CREATE DEFAULT in books online.
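A small sketch (hypothetical object names) showing both ways of supplying a default:

-- A DEFAULT constraint declared with the table
CREATE TABLE Orders (OrderID int PRIMARY KEY, OrderDate datetime DEFAULT GETDATE())
GO
-- A standalone default object, bound to an existing column with sp_bindefault
CREATE DEFAULT df_country AS 'USA'
GO
EXEC sp_bindefault 'df_country', 'Customers.Country'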

SQL Server architecture

9.What is a transaction and what are ACID properties?

A transaction is a logical unit of work in which, all the steps must be performed or none. ACID stands for Atomicity, Consistency, Isolation, Durability. These are the properties of a transaction. For more information and explanation of these properties, see SQL Server books online or any RDBMS fundamentals text book.

10.Explain different isolation levels

An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. Here are the other isolation levels (in the ascending order of isolation): Read Uncommitted, Read Committed, Repeatable Read, Serializable. See SQL Server books online for an explanation of the isolation levels. Be sure to read about SET TRANSACTION ISOLATION LEVEL, which lets you customize the isolation level at the connection level.
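For example, a sketch of raising the isolation level for the current connection (using the authors table from the pubs sample database):

-- Hold shared locks until the end of the transaction
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRAN
    SELECT * FROM authors WHERE au_lname = 'White'
COMMIT TRAN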

CREATE INDEX myIndex ON myTable(myColumn)

11.What type of Index will get created after executing the above statement?

Non-clustered index. Important thing to note: By default a clustered index gets created on the primary key, unless specified otherwise.

12.What's the maximum size of a row?

8060 bytes. Don't be surprised with questions like 'what is the maximum number of columns per table'. Check out SQL Server books online for the page titled: "Maximum Capacity Specifications".

13.Explain Active/Active and Active/Passive cluster configurations

Hopefully you have experience setting up cluster servers. But if you don't, at least be familiar with the way clustering works and the two clustering configurations Active/Active and Active/Passive. SQL Server books online have enough information on this topic and there is a good white paper available on Microsoft site.

14.Explain the architecture of SQL Server

This is a very important question and you had better be able to answer it if you consider yourself a DBA. SQL Server books online are the best place to read about SQL Server architecture. Read up the chapter dedicated to SQL Server architecture.

15.What is lock escalation?

Lock escalation is the process of converting many low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates the many fine-grain locks to fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it's dynamically managed by SQL Server.

16.What's the difference between DELETE TABLE and TRUNCATE TABLE commands?

DELETE TABLE is a logged operation, so the deletion of each row gets logged in the transaction log, which makes it slow. TRUNCATE TABLE also deletes all the rows in a table, but it won't log the deletion of each row, instead it logs the deallocation of the data pages of the table, which makes it faster. Of course, TRUNCATE TABLE can be rolled back.
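A quick sketch (hypothetical table) demonstrating that TRUNCATE TABLE is transactional and can be rolled back:

BEGIN TRAN
    TRUNCATE TABLE tbl_test     -- logs page deallocations, not individual row deletes
ROLLBACK TRAN                   -- the rows come back

SELECT COUNT(*) FROM tbl_test   -- same count as before the TRUNCATE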

17.Explain the storage models of OLAP

Check out MOLAP, ROLAP and HOLAP in SQL Server books online for more information.

18.What are the new features introduced in SQL Server 2000 (or the latest release of SQL Server at the time of your interview)? What changed between the previous version of SQL Server and the current version?

This question is generally asked to see how current your knowledge is. Generally there is a section in the beginning of the books online titled "What's New", which has all such information. Of course, reading just that is not enough; you should have tried those things to better answer the questions. Also check out the section titled "Backward Compatibility" in books online, which talks about the changes that have taken place in the new version.

19.What are constraints? Explain different types of constraints.

Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules or defaults. Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY. For an explanation of these constraints see books online for the pages titled "Constraints", "CREATE TABLE" and "ALTER TABLE".
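A compact sketch (hypothetical tables) showing all the constraint types together:

CREATE TABLE Departments (DeptID int PRIMARY KEY, DeptName varchar(50) NOT NULL)

CREATE TABLE Employees (
    EmpID    int          NOT NULL PRIMARY KEY,                          -- PRIMARY KEY (implies NOT NULL)
    Email    varchar(100) UNIQUE,                                        -- UNIQUE
    Salary   money        CHECK (Salary > 0),                            -- CHECK
    DeptID   int          FOREIGN KEY REFERENCES Departments(DeptID),    -- FOREIGN KEY
    HireDate datetime     NOT NULL DEFAULT GETDATE()                     -- NOT NULL with a default
)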

20.What is an index? What are the types of indexes? How many clustered indexes can be created on a table? I create a separate index on each column of a table. What are the advantages and disadvantages of this approach?

Indexes in SQL Server are similar to the indexes in books. They help SQL Server retrieve the data quicker. Indexes are of two types: clustered indexes and non-clustered indexes. When you create a clustered index on a table, all the rows in the table are stored in the order of the clustered index key. So, there can be only one clustered index per table. Non-clustered indexes have their own storage separate from the table data storage. Non-clustered indexes are stored as B-tree structures (as are clustered indexes), with the leaf level nodes having the index key and its row locator. The row locator could be the RID or the clustered index key, depending upon the absence or presence of a clustered index on the table. If you create an index on each column of a table, it improves the query performance, as the query optimizer can choose from all the existing indexes to come up with an efficient execution plan. At the same time, data modification operations (such as INSERT, UPDATE, DELETE) will become slow, as every time data changes in the table, all the indexes need to be updated. Another disadvantage is that indexes need disk space; the more indexes you have, the more disk space is used.

Database administration

21.What is RAID and what are different types of RAID configurations?

RAID stands for Redundant Array of Inexpensive Disks, used to provide fault tolerance to database servers. There are six RAID levels, 0 through 5, offering different levels of performance and fault tolerance. MSDN has some information about RAID levels, and for detailed information, check out the RAID Advisory Board's homepage.

22.What are the steps you will take to improve performance of a poor performing query?

This is a very open ended question and there could be a lot of reasons behind the poor performance of a query. But some general issues that you could talk about would be: no indexes, table scans, missing or out of date statistics, blocking, excess recompilations of stored procedures, procedures and triggers without SET NOCOUNT ON, poorly written queries with unnecessarily complicated joins, too much normalization, excess usage of cursors and temporary tables. Some of the tools/ways that help you troubleshoot performance problems are: SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT/2000 Performance Monitor, and the graphical execution plan in Query Analyzer.

Download the white paper on performance tuning SQL Server from Microsoft web site. Don't forget to check out sql-server-performance.com

23.What are the steps you will take, if you are tasked with securing an SQL Server?

Again this is another open-ended question. Here are some things you could talk about: Preferring NT authentication, using server, database and application roles to control access to the data, securing the physical database files using NTFS permissions, using an unguessable SA password, restricting physical access to the SQL Server, renaming the Administrator account on the SQL Server computer, disabling the Guest account, enabling auditing, using multiprotocol encryption, setting up SSL, setting up firewalls, isolating SQL Server from the web server etc.

Read the white paper on SQL Server security from Microsoft website. Also check out My SQL Server security best practices

24.What is a deadlock and what is a live lock? How will you go about resolving deadlocks?

Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.

A livelock is one, where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.

Check out SET DEADLOCK_PRIORITY and "Minimizing Deadlocks" in SQL Server books online. Also check out the article Q169960 from Microsoft knowledge base.

25.What is blocking and how would you troubleshoot it?

Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first. Read up the following topics in SQL Server books online: Understanding and avoiding blocking, coding efficient transactions.

26.Explain CREATE DATABASE syntax

Many of us are used to creating databases from the Enterprise Manager or by just issuing the command: CREATE DATABASE MyDB. But what if you have to create a database with two file groups, one on drive C and the other on drive D with log on drive E with an initial size of 600 MB and with a growth factor of 15%? That's why being a DBA you should be familiar with the CREATE DATABASE syntax. Check out SQL Server books online for more information.
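A sketch of the scenario described above (file names and paths are made up; check CREATE DATABASE in books online for the full set of options):

CREATE DATABASE MyDB
ON PRIMARY
    (NAME = MyDB_data1, FILENAME = 'C:\mssql\data\MyDB_data1.mdf', SIZE = 600MB, FILEGROWTH = 15%),
FILEGROUP FG2
    (NAME = MyDB_data2, FILENAME = 'D:\mssql\data\MyDB_data2.ndf', SIZE = 600MB, FILEGROWTH = 15%)
LOG ON
    (NAME = MyDB_log, FILENAME = 'E:\mssql\data\MyDB_log.ldf', SIZE = 100MB, FILEGROWTH = 15%)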

27.How to restart SQL Server in single user mode? How to start SQL Server in minimal configuration mode?

SQL Server can be started from the command line, using SQLSERVR.EXE. This EXE has some very important parameters with which a DBA should be familiar. -m is used for starting SQL Server in single user mode and -f is used to start SQL Server in minimal configuration mode. Check out SQL Server books online for more parameters and their explanations.

As a part of your job, what are the DBCC commands that you commonly use for database maintenance?

DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKCATALOG, DBCC CHECKALLOC, DBCC SHOWCONTIG, DBCC SHRINKDATABASE, DBCC SHRINKFILE, etc. But there are whole loads of DBCC commands which are very useful for DBAs. Check out SQL Server books online for more information.

28.What are statistics, under what circumstances they go out of date, how do you update them?

Statistics determine the selectivity of the indexes. If an indexed column has unique values then the selectivity of that index is higher, as opposed to an index with non-unique values. The query optimizer uses these statistics in determining whether to choose an index or not while executing a query.

Some situations under which you should update statistics:

1) If there is significant change in the key values in the index

2) If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated

3) The database is upgraded from a previous version

Look up SQL Server books online for the following commands: UPDATE STATISTICS, STATS_DATE, DBCC SHOW_STATISTICS, CREATE STATISTICS, DROP STATISTICS, sp_autostats, sp_createstats, sp_updatestats
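For example (a sketch using the titles table and its titleind index from the pubs sample database):

-- Update statistics for one table, or for a single index on it
UPDATE STATISTICS titles
UPDATE STATISTICS titles titleind

-- When were the statistics last updated? (index_id 1 is the clustered index, if one exists)
SELECT STATS_DATE(OBJECT_ID('titles'), 1)

-- Inspect the distribution statistics for an index
DBCC SHOW_STATISTICS (titles, titleind)

-- Update statistics for every table in the current database
EXEC sp_updatestats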

29.What are the different ways of moving data/databases between servers and databases in SQL Server?

There are lots of options available; you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching databases, replication, DTS, BCP, log shipping, INSERT...SELECT, SELECT...INTO, and creating INSERT scripts to generate data.

30.Explain the different types of BACKUPs available in SQL Server. Given a particular scenario, how would you go about choosing a backup plan?

Types of backups you can create in SQL Server 7.0+ are full database backup, differential database backup, transaction log backup, and filegroup backup. Check out the BACKUP and RESTORE commands in SQL Server books online. Be prepared to write the commands in your interview. Books online also have information on detailed backup/restore architecture and when one should go for a particular kind of backup.
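A minimal sketch of the commands (database name and file paths are made up):

-- Full database backup
BACKUP DATABASE MyDB TO DISK = 'D:\backups\MyDB_full.bak'

-- Differential backup (changes since the last full backup)
BACKUP DATABASE MyDB TO DISK = 'D:\backups\MyDB_diff.bak' WITH DIFFERENTIAL

-- Transaction log backup
BACKUP LOG MyDB TO DISK = 'D:\backups\MyDB_log.trn'

-- Restore: full first, then the differential, then the log backups
RESTORE DATABASE MyDB FROM DISK = 'D:\backups\MyDB_full.bak' WITH NORECOVERY
RESTORE DATABASE MyDB FROM DISK = 'D:\backups\MyDB_diff.bak' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'D:\backups\MyDB_log.trn' WITH RECOVERY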

31.What is database replication? What are the different types of replication you can set up in SQL Server?

Replication is the process of copying/moving data between databases on the same or different servers. SQL Server supports the following types of replication scenarios:

Snapshot replication, Transactional replication (with immediate updating subscribers, with queued updating subscribers) , Merge replication

See SQL Server books online for in-depth coverage on replication. Be prepared to explain how the different replication agents function, what the main system tables used in replication are, etc.

32.How to determine the service pack currently installed on SQL Server?

The global variable @@VERSION returns the build number of sqlservr.exe, which can be used to determine the service pack installed. To know more about this process visit SQL Server service packs and versions.
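For example (SERVERPROPERTY is available from SQL Server 2000 onwards):

SELECT @@VERSION

-- On SQL Server 2000 and later:
SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')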

Database programming

33.What are cursors? Explain different types of cursors. What are the disadvantages of cursors? How can you avoid cursors?

Cursors allow row-by-row processing of result sets.

Types of cursors: Static, Dynamic, Forward-only, Keyset-driven. See books online for more information.

Disadvantages of cursors: Each time you fetch a row from the cursor, it results in a network roundtrip, whereas a normal SELECT query makes only one roundtrip, however large the result set is. Cursors are also costly because they require more resources and temporary storage (resulting in more IO operations). Further, there are restrictions on the SELECT statements that can be used with some types of cursors. Most of the time, set based operations can be used instead of cursors.

Here is an example:

If you have to give a flat hike to your employees using the following criteria:

Salary between 30000 and 40000 -- 5000 hike

Salary between 40000 and 55000 -- 7000 hike

Salary between 55000 and 65000 -- 9000 hike

In this situation many developers tend to use a cursor, determine each employee's salary and update his salary according to the above formula. But the same can be achieved by multiple update statements or can be combined in a single UPDATE statement as shown below:

UPDATE tbl_emp SET salary =
CASE WHEN salary BETWEEN 30000 AND 40000 THEN salary + 5000
     WHEN salary BETWEEN 40000 AND 55000 THEN salary + 7000
     WHEN salary BETWEEN 55000 AND 65000 THEN salary + 9000
END

Another situation in which developers tend to use cursors: you need to call a stored procedure when a column in a particular row meets a certain condition. You don't have to use cursors for this. This can be achieved using a WHILE loop, as long as there is a unique key to identify each row. For examples of using a WHILE loop for row-by-row processing, check out the 'My code library' section of my site or search for WHILE.
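A sketch of the WHILE loop approach (reusing the emp table idea; assumes empid is a unique key, and the stored procedure name is hypothetical):

DECLARE @empid int
SELECT @empid = MIN(empid) FROM emp WHERE salary > 50000

WHILE @empid IS NOT NULL
BEGIN
    EXEC dbo.usp_process_employee @empid     -- hypothetical stored procedure

    -- Move on to the next qualifying row
    SELECT @empid = MIN(empid) FROM emp WHERE salary > 50000 AND empid > @empid
END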

34.Write down the general syntax for SELECT statements covering all the options.

Here's the basic syntax: (Also checkout SELECT in books online for advanced syntax).

SELECT select_list
[INTO new_table]
FROM table_source
[WHERE search_condition]
[GROUP BY group_by_expression]
[HAVING search_condition]
[ORDER BY order_expression [ASC | DESC]]

35.What is a join and explain different types of joins.

Joins are used in queries to explain how different tables are related. Joins also let you select data from a table depending upon data from another table.

Types of joins: INNER JOINs, OUTER JOINs, CROSS JOINs. OUTER JOINs are further classified as LEFT OUTER JOINS, RIGHT OUTER JOINS and FULL OUTER JOINS.

36.Can you have a nested transaction?

Yes, very much. Check out BEGIN TRAN, COMMIT, ROLLBACK, SAVE TRAN and @@TRANCOUNT
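A small sketch showing how @@TRANCOUNT behaves with nested BEGIN TRAN calls:

BEGIN TRAN                      -- @@TRANCOUNT = 1
    BEGIN TRAN                  -- @@TRANCOUNT = 2
        PRINT @@TRANCOUNT
    COMMIT TRAN                 -- only decrements the counter, @@TRANCOUNT = 1
COMMIT TRAN                     -- @@TRANCOUNT = 0, the outer commit makes the work permanent

-- Note: a ROLLBACK at any nesting level rolls back the entire transaction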

37.What is an extended stored procedure? Can you instantiate a COM object by using T-SQL?

An extended stored procedure is a function within a DLL (written in a programming language like C, C++ using Open Data Services (ODS) API) that can be called from T-SQL, just the way we call normal stored procedures using the EXEC statement. See books online to learn how to create extended stored procedures and how to add them to SQL Server.

Yes, you can instantiate a COM (written in languages like VB, VC++) object from T-SQL by using sp_OACreate stored procedure. Also see books online for sp_OAMethod, sp_OAGetProperty, sp_OASetProperty, sp_OADestroy. For an example of creating a COM object in VB and calling it from T-SQL, see 'My code library' section of this site.

38.What is the system function to get the current user's user id?

USER_ID(). Also check out other system functions like USER_NAME(), SYSTEM_USER, SESSION_USER, CURRENT_USER, USER, SUSER_SID(), HOST_NAME().

39.What are triggers? How many triggers you can have on a table? How to invoke a trigger on demand?

Triggers are a special kind of stored procedure that gets executed automatically when an INSERT, UPDATE or DELETE operation takes place on a table.

In SQL Server 6.5 you could define only 3 triggers per table, one for INSERT, one for UPDATE and one for DELETE. From SQL Server 7.0 onwards, this restriction is gone, and you can create multiple triggers per action. But in 7.0 there's no way to control the order in which the triggers fire. In SQL Server 2000 you can specify which trigger fires first or fires last using sp_settriggerorder. Triggers can't be invoked on demand. They get triggered only when an associated action (INSERT, UPDATE, DELETE) happens on the table on which they are defined.

Triggers are generally used to implement business rules and auditing. Triggers can also be used to extend the referential integrity checks, but wherever possible, use constraints for this purpose instead of triggers, as constraints are much faster. Until SQL Server 7.0, triggers fired only after the data modification operation happened, so in a way they are called post triggers. But in SQL Server 2000 you can create pre triggers also. Search SQL Server 2000 books online for INSTEAD OF triggers.

Also check out books online for 'inserted table', 'deleted table' and COLUMNS_UPDATED()

There is a trigger defined for INSERT operations on a table, in an OLTP system. The trigger is written to instantiate a COM object and passes the newly inserted rows to it for some custom processing. What do you think of this implementation? Can this be implemented better?

Instantiating COM objects is a time consuming process and since you are doing it from within a trigger, it slows down the data insertion process. Same is the case with sending emails from triggers. This scenario can be better implemented by logging all the necessary data into a separate table, and have a job, which periodically checks this table and does the needful.

40.What is a self-join? Explain it with an example.

A self-join is just like any other join, except that two instances of the same table are joined in the query. Here is an example: an Employees table which contains rows for normal employees as well as managers. So, to find out the managers of all the employees, you need a self-join.

CREATE TABLE emp (empid int, mgrid int, empname char(10))

INSERT emp SELECT 1, 2, 'Vyas'
INSERT emp SELECT 2, 3, 'Mohan'
INSERT emp SELECT 3, NULL, 'Shobha'
INSERT emp SELECT 4, 2, 'Shridhar'
INSERT emp SELECT 5, 2, 'Sourabh'

SELECT t1.empname [Employee], t2.empname [Manager]
FROM emp t1, emp t2
WHERE t1.mgrid = t2.empid

Here's an advanced query using a LEFT OUTER JOIN that even returns the employees without managers (super bosses)

SELECT t1.empname [Employee], COALESCE(t2.empname, 'No manager') [Manager]
FROM emp t1
LEFT OUTER JOIN emp t2
ON t1.mgrid = t2.empid

41. Given an employee table, how would you find out the second highest salary?
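One possible approach (a sketch; assumes a hypothetical employees table with a salary column):

SELECT MAX(salary) AS second_highest_salary
FROM employees
WHERE salary < (SELECT MAX(salary) FROM employees)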


1. What is a major difference between SQL Server 6.5 and 7.0 platform wise? SQL Server 6.5 runs only on Windows NT Server. SQL Server 7.0 runs on Windows NT Server, workstation and Windows 95/98.

2. Is SQL Server implemented as a service or an application? It is implemented as a service on Windows NT server and workstation and as an application on Windows 95/98.
3. What is the difference in Login Security Modes between v6.5 and 7.0? 7.0 doesn’t have Standard Mode, only Windows NT Integrated mode and Mixed mode that consists of both Windows NT Integrated and SQL Server authentication modes.
4. What is a traditional Network Library for SQL Servers? Named Pipes.
5. What is a default TCP/IP socket assigned for SQL Server? 1433
6. If you encounter this kind of error message, what do you need to look into to solve the problem?

[Microsoft][ODBC SQL Server Driver][Named Pipes]Specified SQL Server not found.

1. Check if the MS SQL Server service is running on the computer you are trying to log into
2. Check the Client Configuration utility. The client and server have to be in sync.
7. What is the new philosophy for database devices in SQL Server 7.0? There are no devices anymore in SQL Server 7.0; it uses the file system now.
8. When you create a database, how is it stored? It is stored in two separate files: one file contains the data, system tables and other database objects; the other file stores the transaction log.
9. Let's assume you have data that resides on SQL Server 6.5. You have to move it to SQL Server 7.0. How are you going to do it? You have to use the transfer command.
10. Do you know how to configure the DB2 side of the application? Set up an application ID, create a RACF group with tables attached to this group, attach the ID to this RACF group.
11. What kinds of LAN types do you know? Ethernet networks and Token Ring networks.
12. What is the difference between them? With Ethernet, any device on the network can send data in a packet to any location on the network at any time. With Token Ring, data is transmitted in 'tokens' from computer to computer in a ring or star configuration. Token Ring speed is 4/16 Mbit/sec, Ethernet - 10/100 Mbit/sec.
13. What protocol do both networks use? TCP/IP: Transmission Control Protocol, Internet Protocol.
14. How many bits does an IP address consist of? An IP address is a 32-bit number.
15. How many layers is the TCP/IP protocol composed of? Five (Application, Transport, Internet, Data Link, Physical).
16. How do you define testing of network layers? Review with your developers to identify the layers of the network architecture your web client and web server application interact with, and determine the hardware and software configuration dependencies for the application under test.
17. How do you test for a proper TCP/IP configuration on a Windows machine? Windows NT: IPCONFIG /ALL; Windows 95: WINIPCFG; ping or ping ip.add.re.ss


1. Which of the following statements contains an error?
1. SELECT * FROM emp WHERE empid = 493945;
2. SELECT empid FROM emp WHERE empid= 493945;
3. SELECT empid FROM emp;
4. SELECT empid WHERE empid = 56949 AND lastname = 'SMITH';
2. Which of the following correctly describes how to specify a column alias?
1. Place the alias at the beginning of the statement to describe the table.
2. Place the alias after each column, separated by white space, to describe the column.
3. Place the alias after each column, separated by a comma, to describe the column.
4. Place the alias at the end of the statement to describe the table.
3. The NVL function
1. Assists in the distribution of output across multiple columns.
2. Allows the user to specify alternate output for non-null column values.
3. Allows the user to specify alternate output for null column values.
4. Nullifies the value of the column output.
4. Output from a table called PLAYS with two columns, PLAY_NAME and AUTHOR, is shown below. Which of the following SQL statements produced it?

PLAY_TABLE
————————————-
"Midsummer Night's Dream", SHAKESPEARE
"Waiting For Godot", BECKETT
"The Glass Menagerie", WILLIAMS

1. SELECT play_name || author FROM plays;
2. SELECT play_name, author FROM plays;
3. SELECT play_name || ', ' || author FROM plays;
4. SELECT play_name || ', ' || author PLAY_TABLE FROM plays;
5. Issuing the DEFINE_EDITOR="emacs" will produce which outcome?
1. The emacs editor will become the SQL*Plus default text editor.
2. The emacs editor will start running immediately.
3. The emacs editor will no longer be used by SQL*Plus as the default text editor.
4. The emacs editor will be deleted from the system.
6. The user issues the following statement. What will be displayed if the EMPID selected is 60494?

SELECT DECODE(empid, 38475, "Terminated", 60494, "LOA", "ACTIVE")
FROM emp;

1. 60494
2. LOA
3. Terminated
4. ACTIVE
7. SELECT (TO_CHAR(NVL(SQRT(59483), "INVALID")) FROM DUAL is a valid SQL statement.
1. TRUE
2. FALSE
8. The appropriate table to use when performing arithmetic calculations on values defined within the SELECT statement (not pulled from a table column) is
1. EMP
2. The table containing the column values
3. DUAL
4. An Oracle-defined table
9. Which of the following is not a group function?
1. avg( )
2. sqrt( )
3. sum( )
4. max( )


10. Once defined, how long will a variable remain so in SQL*Plus?
1. Until the database is shut down
2. Until the instance is shut down
3. Until the statement completes
4. Until the session completes
11. The default character for specifying runtime variables in SELECT statements is
1. Ampersand
2. Ellipses
3. Quotation marks
4. Asterisk
12. A user is setting up a join operation between tables EMP and DEPT. There are some employees in the EMP table that the user wants returned by the query, but the employees are not assigned to departments yet. Which SELECT statement is most appropriate for this user?
1. select e.empid, d.head from emp e, dept d;
2. select e.empid, d.head from emp e, dept d where e.dept# = d.dept#;
3. select e.empid, d.head from emp e, dept d where e.dept# = d.dept# (+);
4. select e.empid, d.head from emp e, dept d where e.dept# (+) = d.dept#;
13. Developer ANJU executes the following statement: CREATE TABLE animals AS SELECT * from MASTER.ANIMALS; What is the effect of this statement?
1. A table named ANIMALS will be created in the MASTER schema with the same data as the ANIMALS table owned by ANJU.
2. A table named ANJU will be created in the ANIMALS schema with the same data as the ANIMALS table owned by MASTER.
3. A table named ANIMALS will be created in the ANJU schema with the same data as the ANIMALS table owned by MASTER.
4. A table named MASTER will be created in the ANIMALS schema with the same data as the ANJU table owned by ANIMALS.
14. User JANKO would like to insert a row into the EMPLOYEE table, which has three columns: EMPID, LASTNAME, and SALARY. The user would like to enter data for EMPID 59694, LASTNAME Harris, but no salary. Which statement would work best?
1. INSERT INTO employee VALUES (59694, 'HARRIS', NULL);
2. INSERT INTO employee VALUES (59694, 'HARRIS');
3. INSERT INTO employee (EMPID, LASTNAME, SALARY) VALUES (59694, 'HARRIS');
4. INSERT INTO employee (SELECT 59694 FROM 'HARRIS');
15. Which three of the following are valid database datatypes in Oracle? (Choose three.)
1. CHAR
2. VARCHAR2
3. BOOLEAN
4. NUMBER
16. Omitting the WHERE clause from a DELETE statement has which of the following effects?
1. The delete statement will fail because there are no records to delete.
2. The delete statement will prompt the user to enter criteria for the deletion
3. The delete statement will fail because of syntax error.
4. The delete statement will remove all records from the table.
17. Creating a foreign-key constraint between columns of two tables defined with two different datatypes will produce an error.
1. TRUE
2. FALSE
18. Dropping a table has which of the following effects on a nonunique index created for the table?
1. No effect.
2. The index will be dropped.
3. The index will be rendered invalid.
4. The index will contain NULL values.


19. To increase the number of nullable columns for a table,
1. Use the alter table statement.
2. Ensure that all column values are NULL for all rows.
3. First increase the size of adjacent column datatypes, then add the column.
4. Add the column, populate the column, then add the NOT NULL constraint.
20. Which line of the following statement will produce an error?
1. CREATE TABLE goods
2. (good_no NUMBER,
3. good_name VARCHAR2 check(good_name in (SELECT name FROM avail_goods)),
4. CONSTRAINT pk_goods_01
5. PRIMARY KEY (goodno));
6. There are no errors in this statement.
21. MAXVALUE is a valid parameter for sequence creation.
1. TRUE
2. FALSE
22. Which of the following lines in the SELECT statement below contain an error?
1. SELECT DECODE(empid, 58385, "INACTIVE", "ACTIVE") empid
2. FROM emp
3. WHERE SUBSTR(lastname,1,1) > TO_NUMBER('S')
4. AND empid > 02000
5. ORDER BY empid DESC, lastname ASC;
6. There are no errors in this statement.
23. Which function below can best be categorized as similar in function to an IF-THEN-ELSE statement?
1. SQRT
2. DECODE
3. NEW_TIME
4. ROWIDTOCHAR
24. Which two of the following orders are used in ORDER BY clauses? (choose two)
1. ABS
2. ASC
3. DESC
4. DISC
25. You query the database with this command

SELECT name
FROM employee
WHERE name LIKE '_a%';

Which names are displayed?

1. Names starting with "a"
2. Names starting with "a" or "A"
3. Names containing "a" as the second character
4. Names containing "a" as any letter except the first


Transact-SQL Query
SQL Server Performance Tuning Tips

This tip may sound obvious to most of you, but I have seen professional developers, in two major SQL Server-based applications used worldwide, not follow it. And that is to always include a WHERE clause in your SELECT statement to narrow the number of rows returned. If you don't use a WHERE clause, then SQL Server will perform a table scan of your table and return all of the rows. In some cases you may want to return all rows, and not using a WHERE clause is appropriate then. But if you don't need all the rows returned, use a WHERE clause to limit the number of rows returned.

By returning data you don't need, you are causing SQL Server to perform I/O it doesn't need to perform, wasting SQL Server resources. In addition, it increases network traffic, which can also lead to reduced performance. And if the table is very large, a table scan will lock the table during the time-consuming scan, preventing other users from accessing it, hurting concurrency.

Another negative aspect of a table scan is that it will tend to flush out data pages from the cache with useless data, which reduces SQL Server's ability to reuse useful data in the cache, which increases disk I/O and hurts performance. [6.5, 7.0, 2000] Updated 4-17-2003

*****

To help identify long running queries, use the SQL Server Profiler Create Trace Wizard to run the "TSQL By Duration" trace. You can specify the length of the long running queries you are trying to identify (such as over 1000 milliseconds), and then have these recorded in a log for you to investigate later. [7.0]

*****

When using the UNION statement, keep in mind that, by default, it performs the equivalent of a SELECT DISTINCT on the final result set. In other words, UNION takes the results of two like recordsets, combines them, and then performs a SELECT DISTINCT in order to eliminate any duplicate rows. This process occurs even if there are no duplicate records in the final recordset. If you know that there are duplicate records, and this presents a problem for your application, then by all means use the UNION statement to eliminate the duplicate rows.

On the other hand, if you know that there will never be any duplicate rows, or if there are, and this presents no problem to your application, then you should use the UNION ALL statement instead of the UNION statement. The advantage of UNION ALL is that it does not perform the SELECT DISTINCT function, which saves SQL Server from using a lot of unnecessary resources. [6.5, 7.0, 2000] Updated 10-30-2003

*****

Sometimes you might want to merge two or more sets of data resulting from two or more queries using UNION. For example:

SELECT column_name1, column_name2
FROM table_name1
WHERE column_name1 = some_value
UNION
SELECT column_name1, column_name2
FROM table_name1
WHERE column_name2 = some_value

This same query can be rewritten, like the following example, and when doing so, performance will be boosted:

SELECT DISTINCT column_name1, column_name2
FROM table_name1
WHERE column_name1 = some_value OR column_name2 = some_value

And if you can assume that neither criteria will return duplicate rows, you can even further boost the performance of this query by removing the DISTINCT clause. [6.5, 7.0, 2000] Added 6-5-2003

*****

Carefully evaluate whether your SELECT query needs the DISTINCT clause or not. Some developers automatically add this clause to every one of their SELECT statements, even when it is not necessary. This is a bad habit that should be stopped.

The DISTINCT clause should only be used in SELECT statements if you know that duplicate returned rows are a possibility, and that having duplicate rows in the result set would cause problems with your application.

The DISTINCT clause creates a lot of extra work for SQL Server, and reduces the physical resources that other SQL statements have at their disposal. Because of this, only use the DISTINCT clause if it is necessary. [6.5, 7.0, 2000] Updated 10-30-2003

*****

In your queries, don't return column data you don't need. For example, you should not use SELECT * to return all the columns from a table if you don't need all the data from each column. In addition, using SELECT * prevents the use of covered indexes, further potentially hurting query performance. [6.5, 7.0, 2000] Updated 6-21-2004

*****

If your application allows users to run queries, but you are unable in your application to easily prevent users from returning hundreds, even thousands of unnecessary rows of data they don't need, consider using the TOP operator within the SELECT statement. This way, you can limit how many rows are returned, even if the user doesn't enter any criteria to help reduce the number of rows returned to the client. For example, the statement:

SELECT TOP 100 fname, lname FROM customers
WHERE state = 'mo'

limits the results to the first 100 rows returned, even if 10,000 rows actually meet the criteria of the WHERE clause. When the specified number of rows is reached, all processing on the query stops, potentially saving SQL Server overhead, and boosting performance.

The TOP operator works by allowing you to specify a specific number of rows to be returned, like the example above, or by specifying a percentage value, like this:

SELECT TOP 10 PERCENT fname, lname FROM customers
WHERE state = 'mo'

In the above example, only 10 percent of the available rows would be returned.

Keep in mind that using this option may prevent the user from getting the data they need. For example, the data they are looking for may be in record 101, but they only get to see the first 100 records. Because of this, use this option with discretion. [7.0, 2000] Updated 10-30-2003

*****

You may have heard of a SET command called SET ROWCOUNT. Like the TOP operator, it is designed to limit how many rows are returned from a SELECT statement. In effect, the SET ROWCOUNT and the TOP operator perform the same function.

While in most cases, using either option works equally efficiently, there are some instances (such as rows returned from an unsorted heap) where the TOP operator is more efficient than using SET ROWCOUNT. Because of this, using the TOP operator is preferable to using SET ROWCOUNT to limit the number of rows returned by a query. [6.5, 7.0, 2000] Updated 10-30-2003

*****

In a WHERE clause, the various operators used directly affect how fast a query is run. This is because some operators lend themselves to speed over other operators. Of course, you may not have any choice of which operator you use in your WHERE clauses, but sometimes you do.

Here are the key operators used in the WHERE clause, ordered by their performance. Those operators at the top will produce results faster than those listed at the bottom.

· =

· >, >=, <, <=

· LIKE

· <>

The lesson here is to use = as much as possible, and <> as little as possible. [6.5, 7.0, 2000] Added 5-30-2003

*****

In a WHERE clause, the various operands used directly affect how fast a query is run. This is because some operands lend themselves to speed over other operands. Of course, you may not have any choice of which operand you use in your WHERE clauses, but sometimes you do.

Here are the key operands used in the WHERE clause, ordered by their performance. Those operands at the top will produce results faster than those listed at the bottom.

· A single literal used by itself on one side of an operator

· A single column name used by itself on one side of an operator, a single parameter used by itself on one side of an operator

· A multi-operand expression on one side of an operator

· A single exact number on one side of an operator

· Other numeric number (other than exact), date and time

· Character data, Nulls

The simpler the operand, and the more exact the value, the better the overall performance. [6.5, 7.0, 2000] Added 5-30-2003

*****

If a WHERE clause includes multiple expressions, there is generally no performance benefit gained by ordering the various expressions in any particular order. This is because the SQL Server Query Optimizer does this for you, saving you the effort. There are a few exceptions to this, which are discussed on this web site. [7.0, 2000] Added 5-30-2003

*****

Don't include code that doesn't do anything. This may sound obvious, but I have seen this in some off-the-shelf SQL Server-based applications. For example, you may see code like this:

SELECT column_name FROM table_name
WHERE 1 = 0

When this query is run, no rows will be returned. Obviously, this is a simple example (and most of the cases where I have seen this done have been very long queries); a query like this (or part of a larger query) doesn't perform anything useful, and shouldn't be run. It is just wasting SQL Server resources. In addition, I have seen more than one case where such dead code actually causes SQL Server to throw errors, preventing the code from even running. [6.5, 7.0, 2000] Added 5-30-2003

*****

By default, some developers, especially those who have not worked with SQL Server before, routinely include code similar to this in their WHERE clauses when they make string comparisons:

SELECT column_name FROM table_name
WHERE LOWER(column_name) = 'name'

In other words, these developers are making the assumption that the data in SQL Server is case-sensitive, which it generally is not. If your SQL Server database is not configured to be case sensitive, you don't need to use LOWER or UPPER to force the case of text to be equal for a comparison to be performed. Just leave these functions out of your code. This will speed up the performance of your query, as any use of text functions in a WHERE clause hurts performance.

But what if your database has been configured to be case-sensitive? Should you then use the LOWER and UPPER functions to ensure that comparisons are properly compared? No. The above example is still poor coding. If you have to deal with ensuring case is consistent for proper comparisons, use the technique described below, along with appropriate indexes on the column in question:

SELECT column_name FROM table_name
WHERE column_name = 'NAME' or column_name = 'name'

This code will run much faster than the first example. [6.5, 7.0, 2000] Added 5-30-2003

*****

Try to avoid WHERE clauses that are non-sargable. The term "sargable" (which is in effect a made-up word) comes from the pseudo-acronym "SARG", which stands for "Search ARGument," which refers to a WHERE clause that compares a column to a constant value. If a WHERE clause is sargable, this means that it can take advantage of an index (assuming one is available) to speed completion of the query. If a WHERE clause is non-sargable, this means that the WHERE clause (or at least part of it) cannot take advantage of an index, instead performing a table/index scan, which may cause the query's performance to suffer.

Non-sargable search arguments in the WHERE clause, such as "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", "NOT LIKE", and "LIKE '%500'" generally prevent (but not always) the query optimizer from using an index to perform a search. In addition, expressions that include a function on a column, expressions that have the same column on both sides of the operator, or comparisons against a column (not a constant), are not sargable.

But not every WHERE clause that has a non-sargable expression in it is doomed to a table/index scan. If the WHERE clause includes both sargable and non-sargable clauses, then at least the sargable clauses can use an index (if one exists) to help access the data quickly.

In many cases, if there is a covering index on the table, which includes all of the columns in the SELECT, JOIN, and WHERE clauses in a query, then the covering index can be used instead of a table/index scan to return a query's data, even if it has a non-sargable WHERE clause. But keep in mind that covering indexes have their own drawbacks, such as producing very wide indexes that increase disk I/O when they are read.

In some cases, it may be possible to rewrite a non-sargable WHERE clause into one that is sargable. For example, the clause:

WHERE SUBSTRING(firstname,1,1) = 'm'

can be rewritten like this:

WHERE firstname like 'm%'

Both of these WHERE clauses produce the same result, but the first one is non-sargable (it uses a function) and will run slow, while the second one is sargable, and will run much faster.

WHERE clauses that perform some function on a column are non-sargable. On the other hand, if you can rewrite the WHERE clause so that the column and function are separate, then the query can use an available index, greatly boosting performance. for example:

Function Acts Directly on Column, and Index Cannot Be Used:

SELECT member_number, first_name, last_name
FROM members
WHERE DATEDIFF(yy,dateofbirth,GETDATE()) > 21

Function Has Been Separated From Column, and an Index Can Be Used:

SELECT member_number, first_name, last_name
FROM members
WHERE dateofbirth < DATEADD(yy,-21,GETDATE())

Each of the above queries produces the same results, but the second query will use an index because the function is not performed directly on the column, as it is in the first example. The moral of this story is to try to rewrite WHERE clauses that have functions so that the function does not act directly on the column.

WHERE clauses that use NOT are not sargable, but can often be rewritten to remove the NOT from the WHERE clause, for example:

WHERE NOT column_name > 5

to

WHERE column_name <= 5

Each of the above clauses produce the same results, but the second one is sargable.

If you don't know if a particular WHERE clause is sargable or non-sargable, check out the query's execution plan in Query Analyzer. Doing this, you can very quickly see if the query will be using index lookups or table/index scans to return your results.

With some careful analysis, and some clever thought, many non-sargable queries can be written so that they are sargable. Your goal for best performance (assuming it is possible) is to get the left side of a search condition to be a single column name, and the right side an easy to look up value. [6.5, 7.0, 2000] Updated 6-2-2003

*****

If you run into a situation where a WHERE clause is not sargable because of the use of a function on the right side of an equality sign (and there is no other way to rewrite the WHERE clause), consider creating an index on a computed column instead. This way, you avoid the non-sargable WHERE clause altogether, using the results of the function in your WHERE clause instead.

Because of the additional overhead required for indexes on computed columns, you will only want to do this if you need to run this same query over and over in your application, thereby justifying the overhead of the indexed computed column. [2000] Updated 6-21-2004
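A sketch of the idea (table and column names are borrowed loosely from the earlier examples and are hypothetical; note that indexed computed columns also require certain SET options, such as QUOTED_IDENTIFIER and ANSI_NULLS, to be ON):

-- Store the result of the function as a computed column and index it
ALTER TABLE members ADD firstletter AS SUBSTRING(firstname, 1, 1)
CREATE INDEX IX_members_firstletter ON members (firstletter)

-- The query can now use the index instead of applying the function to every row
SELECT member_number, firstname FROM members WHERE firstletter = 'm'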

*****

If you currently have a query that uses NOT IN, which offers poor performance because the SQL Server optimizer has to use a nested table scan to perform this activity, try to use one of the following options instead, all of which offer better performance:

· Use EXISTS or NOT EXISTS

· Use IN

· Perform a LEFT OUTER JOIN and check for a NULL condition

[6.5, 7.0, 2000] Updated 10-30-2003
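For example, a sketch of the third option (hypothetical customers and orders tables), returning the customers that have no orders without using NOT IN:

SELECT c.customer_id
FROM customers c
LEFT OUTER JOIN orders o ON c.customer_id = o.customer_id
WHERE o.customer_id IS NULL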

*****

When you have a choice of using the IN or the EXISTS clause in your Transact-SQL, you will generally want to use the EXISTS clause, as it is usually more efficient and performs faster. [6.5, 7.0, 2000] Updated 10-30-2003

*****

If you find that SQL Server uses a TABLE SCAN instead of an INDEX SEEK when you use an IN or OR clause as part of your WHERE clause, even when those columns are covered by an index, consider using an index hint to force the Query Optimizer to use the index.

For example:

SELECT * FROM tblTaskProcesses WHERE nextprocess = 1 AND processid IN (8,32,45)

takes about 3 seconds, while:

SELECT * FROM tblTaskProcesses (INDEX = IX_ProcessID) WHERE nextprocess = 1 AND processid IN (8,32,45)

returns in under a second. [7.0, 2000] Updated 6-21-2004 Contributed by David Ames

*****

If you use LIKE in your WHERE clause, try to use one or more leading characters in the clause, if at all possible. For example, use:

LIKE 'm%'

not:

LIKE '%m'

If you use a leading character in your LIKE clause, then the Query Optimizer has the ability to potentially use an index to perform the query, speeding performance and reducing the load on SQL Server.

But if the leading character in a LIKE clause is a wildcard, the Query Optimizer will not be able to use an index, and a table scan must be run, reducing performance and taking more time.

The more leading characters you can use in the LIKE clause, the more likely the Query Optimizer will find and use a suitable index. [6.5, 7.0, 2000] Updated 10-30-2003

*****

If your application needs to retrieve summary data often, but you don't want to have the overhead of calculating it on the fly every time it is needed, consider using a trigger that updates summary values after each transaction into a summary table.

While the trigger has some overhead, overall, it may be less than having to calculate the data every time the summary data is needed. You may have to experiment to see which method is fastest for your environment. [6.5, 7.0, 2000] Updated 10-30-2003

*****

If your application needs to insert a large binary value into an image data column, perform this task using a stored procedure, not using an INSERT statement embedded in your application.

The reason for this is because the application must first convert the binary value into a character string (which doubles its size, thus increasing network traffic and taking more time) before it can be sent to the server. And when the server receives the character string, it then has to convert it back to the binary format (taking even more time).

Using a stored procedure avoids all this because all the activity occurs on the SQL Server, and little data is transmitted over the network. [6.5, 7.0, 2000] Updated 10-30-2003

*****

When you have a choice of using the IN or the BETWEEN clauses in your Transact-SQL, you will generally want to use the BETWEEN clause, as it is much more efficient. For example:

SELECT customer_number, customer_name
FROM customer
WHERE customer_number in (1000, 1001, 1002, 1003, 1004)

is much less efficient than this:

SELECT customer_number, customer_name
FROM customer
WHERE customer_number BETWEEN 1000 and 1004

Assuming there is a useful index on customer_number, the Query Optimizer can locate a range of numbers much faster (using BETWEEN) than it can find a series of numbers using the IN clause (which is really just another form of the OR clause). [6.5, 7.0, 2000] Updated 10-30-2003

*****

If possible, try to avoid using the SUBSTRING function in your WHERE clauses. Depending on how it is constructed, using the SUBSTRING function can force a table scan instead of allowing the optimizer to use an index (assuming there is one). If the substring you are searching for does not include the first character of the column you are searching for, then a table scan is performed.

If possible, you should avoid using the SUBSTRING function and use the LIKE condition instead, for better performance.

Instead of doing this:

WHERE SUBSTRING(column_name,1,1) = 'b'

Try using this instead:

WHERE column_name LIKE 'b%'

If you decide to make this choice, keep in mind that you will want your LIKE condition to be sargable, which means that you cannot place a wildcard in the first position. [6.5, 7.0, 2000] Updated 6-4-2003

*****

Where possible, avoid string concatenation in your Transact-SQL code, as it is not a fast process, contributing to overall slower performance of your application. [6.5, 7.0, 2000] Updated 10-30-2003

*****

Generally, avoid using optimizer hints in your queries. This is because it is generally very hard to outguess the Query Optimizer. Optimizer hints are special keywords that you include with your query to force how the Query Optimizer runs. If you decide to include a hint in a query, this forces the Query Optimizer to become static, preventing the Query Optimizer from dynamically adapting to the current environment for the given query. More often than not, this hurts, not helps performance.

If you think that a hint might be necessary to optimize your query, be sure you do all of the following first:

· Update the statistics on the relevant tables.

· If the problem query is inside a stored procedure, recompile it.

· Review the search arguments to see if they are sargable, and if not, try to rewrite them so that they are sargable.

· Review the current indexes, and make changes if necessary.

If you have done all of the above, and the query is not running as you expect, then you may want to consider using an appropriate optimizer hint.

If you haven't heeded my advice and have decided to use some hints, keep in mind that as your data changes, and as the Query Optimizer changes (through service packs and new releases of SQL Server), your hard-coded hints may no longer offer the benefits they once did. So if you use hints, you need to periodically review them to see if they are still performing as expected. [6.5, 7.0, 2000] Updated 6-21-2004

*****

If you have a WHERE clause that includes expressions connected by two or more AND operators, SQL Server will evaluate them from left to right in the order they are written. This assumes that no parentheses have been used to change the order of execution. Because of this, you may want to consider one of the following when using AND:

· Locate the least likely true AND expression first. This way, if the AND expression is false, the clause will end immediately, saving time.

· If both parts of an AND expression are equally likely to be false, put the least complex AND expression first. This way, if it is false, less work will have to be done to evaluate the expression.

You may want to consider using Query Analyzer to look at the execution plans of your queries to see which is best for your situation. [6.5, 7.0, 2000] Updated 6-21-2004

*****

If you want to boost the performance of a query that includes an AND operator in the WHERE clause, consider the following:

· Of the search criteria in the WHERE clause, at least one of them should be based on a highly selective column that has an index.

· If at least one of the search criteria in the WHERE clause is not highly selective, consider adding indexes to all of the columns referenced in the WHERE clause.

· If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.

[7.0, 2000] Updated 2-8-2002

*****

The Query Optimizer will perform a table scan or a clustered index scan on a table if the WHERE clause in the query contains an OR operator and if any of the referenced columns in the OR clause are not indexed (or do not have a useful index). Because of this, if you use many queries with OR clauses, you will want to ensure that each referenced column in the WHERE clause has a useful index. [7.0, 2000] Added 10-17-2000

*****

A query with one or more OR clauses can sometimes be rewritten as a series of queries that are combined with a UNION ALL statement, in order to boost the performance of the query. For example, let's take a look at the following query:

SELECT employeeID, firstname, lastname
FROM names
WHERE dept = 'prod' or city = 'Orlando' or division = 'food'

This query has three separate conditions in the WHERE clause. In order for this query to use an index, then there must be an index on all three columns found in the WHERE clause.

This same query can be written using UNION ALL instead of OR, like this example:

SELECT employeeID, firstname, lastname FROM names WHERE dept = 'prod'
UNION ALL
SELECT employeeID, firstname, lastname FROM names WHERE city = 'Orlando'
UNION ALL
SELECT employeeID, firstname, lastname FROM names WHERE division = 'food'

Each of these queries will produce the same results. If there is only an index on dept, but not on the other columns in the WHERE clause, then the first version will not use any index and a table scan must be performed. But the second version of the query will use the index for part of the query, though not for all of it.

Admittedly, this is a very simple example, but even so, it does demonstrate how rewriting a query can affect whether or not an index is used or not. If this query was much more complex, then the approach of using UNION ALL might be must more efficient, as it allows you to tune each part of the index individually, something that cannot be done if you use only ORs in your query.

Note, that I am using UNION ALL instead of UNION. The reason for this is to prevent the UNION statement from trying to sort the data and remove duplicates, which hurts performance. Of course, if there is the possibility of duplicates, and you want to remove them, then of course you can use just UNION.

If you have a query that uses ORs and it not making the best use of indexes, consider rewriting it as a UNION ALL, and then testing performance. Only through testing can you be sure that one version of your query will be faster than another. [7.0, 2000] Added 2-8-2002

*****

Don't use ORDER BY in your SELECT statements unless you really need to, as it adds a lot of extra overhead. For example, perhaps it may be more efficient to sort the data at the client than at the server. In other cases, perhaps the client doesn't even need sorted data to achieve its goal. The key here is to remember that you shouldn't automatically sort data, unless you know it is necessary. [6.5, 7.0, 2000] Updated 6-4-2003

*****

Whenever SQL Server has to perform a sorting operation, additional resources have to be used to perform this task. Sorting often occurs when any of the following Transact-SQL statements are executed:

· ORDER BY

· GROUP BY

· SELECT DISTINCT

· UNION

· CREATE INDEX (generally not as critical, as it happens much less often)

In many cases, these commands cannot be avoided. On the other hand, there are a few ways that sorting overhead can be reduced. These include:

· Keep the number of rows to be sorted to a minimum. Do this by only returning those rows that absolutely need to be sorted.

· Keep the number of columns to be sorted to a minimum. In other words, don't sort more columns than required.

· Keep the width (physical size) of the columns to be sorted to a minimum.

· Sort columns with numeric datatypes instead of character datatypes.

When using any of the above Transact-SQL commands, try to keep the above performance-boosting suggestions in mind. [6.5, 7.0, 2000] Added 6-5-2003

*****

If you have to sort by a particular column often, consider making that column the clustered index. This is because the data is already presorted for you and SQL Server is smart enough not to re-sort the data. [6.5, 7.0, 2000] Added 6-5-2003

*****

If your SELECT statement includes an IN operator along with a list of values to be tested in the query, order the list of values so that the most frequently found values are placed at the beginning of the list, and the less frequently found values are placed at the end of the list. This can speed performance because the IN option returns true as soon as any of the values in the list produces a match. The sooner the match is made, the faster the query completes. [6.5, 7.0, 2000] Added 11-27-2000

*****

If you need to use the SELECT INTO option, keep in mind that it can lock system tables, preventing other users from accessing the data they need. If you do need to use SELECT INTO, try to schedule it when your SQL Server is less busy, and try to keep the amount of data inserted to a minimum. [6.5, 7.0, 2000] Added 11-28-2000

*****

If your SELECT statement contains a HAVING clause, write your query so that the WHERE clause does most of the work (removing undesired rows), rather than relying on the HAVING clause to remove them. Using the WHERE clause appropriately can eliminate unnecessary rows before they get to the GROUP BY and HAVING clauses, saving some unnecessary work and boosting performance.

For example, in a SELECT statement with WHERE, GROUP BY, and HAVING clauses, here's what happens. First, the WHERE clause is used to select the appropriate rows that need to be grouped. Next, the GROUP BY clause divides the rows into sets of grouped rows, and then aggregates their values. And last, the HAVING clause then eliminates undesired aggregated groups. If the WHERE clause is used to eliminate as many of the undesired rows as possible, this means the GROUP BY and the HAVING clauses will have less work to do, boosting the overall performance of the query. [6.5, 7.0, 2000] Added 12-11-2000
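As a minimal sketch of this idea against the Northwind sample tables (the optimizer can often move such a predicate itself, so treat this only as an illustration of intent):

-- Slower form: every row is grouped first, then unwanted groups are discarded
SELECT ProductID, SUM(Quantity) AS TotalQty
FROM [Order Details]
GROUP BY ProductID
HAVING ProductID = 11

-- Faster form: unwanted rows are removed before the GROUP BY ever sees them
SELECT ProductID, SUM(Quantity) AS TotalQty
FROM [Order Details]
WHERE ProductID = 11
GROUP BY ProductID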

*****

If your application performs many wildcard (LIKE %) text searches on CHAR or VARCHAR columns, consider using SQL Server's full-text search option. The Search Service can significantly speed up wildcard searches of text stored in a database. [7.0, 2000] Updated 1-12-2001

*****

The GROUP BY clause can be used with or without an aggregate function. But if you want optimum performance, don't use the GROUP BY clause without an aggregate function. This is because you can accomplish the same end result by using the DISTINCT option instead, and it is faster.

For example, you could write your query two different ways:

USE Northwind
SELECT OrderID
FROM [Order Details]
WHERE UnitPrice > 10
GROUP BY OrderID

or

USE Northwind
SELECT DISTINCT OrderID
FROM [Order Details]
WHERE UnitPrice > 10

Both of the above queries produce the same results, but the second one will use less resources and perform faster. [6.5, 7.0, 2000] Added 1-12-2001

*****

The GROUP BY clause can be sped up if you follow these suggestions:

· Keep the number of rows returned by the query as small as possible.

· Keep the number of groupings as few as possible.

· Don't group redundant columns.

· If there is a JOIN in the same SELECT statement that has a GROUP BY, try to rewrite the query to use a subquery instead of using a JOIN. If this is possible, performance will be faster. If you have to use a JOIN, try to make the GROUP BY column from the same table as the column or columns on which the set function is used.

· Consider adding an ORDER BY clause to the SELECT statement that orders by the same column as the GROUP BY. This may cause the GROUP BY to perform faster. Test this to see if it is true in your particular situation.

[7.0, 2000] Added 6-6-2003

*****

Sometimes perception is more important than reality. For example, which of the following two queries is faster:

· A query that takes 30 seconds to run, and then displays all of the required results.

· A query that takes 60 seconds to run, but displays the first screen full of records in less than 1 second.

Most DBAs would choose the first option, as it takes fewer server resources and performs faster. But from many users' point of view, the second one may be more palatable. By getting immediate feedback, the user gets the impression that the application is fast, even though in the background, it is not.

If you run into situations where perception is more important than raw performance, consider using the FAST query hint. The FAST query hint is used with the SELECT statement using this form:

OPTION(FAST number_of_rows)

where number_of_rows is the number of rows that are to be displayed as fast as possible.

When this hint is added to a SELECT statement, it tells the Query Optimizer to return the specified number of rows as fast as possible, without regard to how long it will take to perform the overall query. Before rolling out an application using this hint, I would suggest you test it thoroughly to see that it performs as you expect. You may find out that the query takes about the same amount of time whether the hint is used or not. If this is the case, then don't use the hint. [7.0, 2000] Added 3-6-2001
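As a sketch of the syntax (the Northwind Orders table is used only for illustration):

-- Ask the Query Optimizer to get the first 20 rows to the client as quickly as possible
SELECT OrderID, CustomerID, OrderDate
FROM Orders
ORDER BY OrderDate DESC
OPTION (FAST 20)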

*****

Instead of using temporary tables, consider using a derived table. A derived table is the result of using a SELECT statement in the FROM clause of an existing SELECT statement. By using derived tables instead of temporary tables, we can reduce I/O and boost our application's performance. [7.0, 2000] Added 3-9-2001
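Here is a minimal sketch of the difference, again using the Northwind Orders table purely for illustration:

-- Temporary table version: two steps and extra work in tempdb
SELECT CustomerID, COUNT(*) AS OrderCount
INTO #cust_orders
FROM Orders
GROUP BY CustomerID

SELECT CustomerID, OrderCount
FROM #cust_orders
WHERE OrderCount > 10

DROP TABLE #cust_orders

-- Derived table version: a single statement, no temporary table
SELECT d.CustomerID, d.OrderCount
FROM (SELECT CustomerID, COUNT(*) AS OrderCount
FROM Orders
GROUP BY CustomerID) AS d
WHERE d.OrderCount > 10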

*****

SQL Server 2000 offers a new data type called "table." Its main purpose is for the temporary storage of a set of rows. A variable, of type "table," behaves as if it is a local variable. And like local variables, it has a limited scope, which is within the batch, function, or stored procedure in which it was declared. In most cases, a table variable can be used like a normal table. SELECTs, INSERTs, UPDATEs, and DELETEs can all be made against a table variable.

For best performance, if you need a temporary table in your Transact-SQL code, try to use a table variable instead of creating a conventional temporary table. Table variables are created and manipulated in memory instead of the tempdb database, making them much faster. In addition, table variables found in stored procedures result in fewer recompilations (than when using temporary tables), and transactions using table variables only last as long as the duration of an update on the table variable, requiring less locking and logging resources. [2000] Added 8-7-2001
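A minimal sketch of a table variable holding a small working set (the table and date filter are illustrative only):

-- Declare and fill a table variable (SQL Server 2000)
DECLARE @recent_orders TABLE (OrderID int PRIMARY KEY, OrderDate datetime)

INSERT INTO @recent_orders (OrderID, OrderDate)
SELECT OrderID, OrderDate
FROM Orders
WHERE OrderDate >= '19980101'

-- Use it like a normal table within the same batch
SELECT COUNT(*) AS RecentOrderCount FROM @recent_orders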

*****

It is a fairly common request to write a Transact-SQL query to compare a parent table and a child table and find out if there are any parent records that don't have a match in the child table. Generally, there are three ways this can be done:

Using a NOT EXISTS

SELECT a.hdr_key
FROM hdr_tbl a
WHERE NOT EXISTS (SELECT * FROM dtl_tbl b WHERE a.hdr_key = b.hdr_key)

Using a Left Join

SELECT a.hdr_key
FROM hdr_tbl a
LEFT JOIN dtl_tbl b ON a.hdr_key = b.hdr_key
WHERE b.hdr_key IS NULL

Using a NOT IN

SELECT hdr_key
FROM hdr_tbl
WHERE hdr_key NOT IN (SELECT hdr_key FROM dtl_tbl)

In each case, the above query will return identical results. But, which of these three variations of the same query produces the best performance? Assuming everything else is equal, the best performing version through the worst performing version will be from top to bottom, as displayed above. In other words, the NOT EXISTS variation of this query is generally the most efficient.

I say generally, because the indexes found on the tables, along with the number of rows in each table, can influence the results. If you are not sure which variation to try yourself, you can try them all and see which produces the best results in your particular circumstances. [7.0, 2000] Added 3-29-2002

*****

Be careful when using OR in your WHERE clause; it is fairly simple to accidentally retrieve much more data than you need, which hurts performance. For example, take a look at the query below:

SELECT companyid, plantid, formulaid
FROM batchrecords
WHERE companyid = '0001' and plantid = '0202' and formulaid = '39988773'
OR
companyid = '0001' and plantid = '0202'

As you can see from this query, the WHERE clause is redundant, as:

companyid = '0001' and plantid = '0202' and formulaid = '39988773'

is a subset of:

companyid = '0001' and plantid = '0202'

In other words, this query is redundant. Unfortunately, the SQL Server Query Optimizer isn't smart enough to know this, and will do exactly what you tell it to. What will happen is that SQL Server will have to retrieve all the data you have requested, then in effect do a SELECT DISTINCT to remove the redundant rows it unnecessarily found.

In this case, if you drop this code from the query:

OR
companyid = '0001' and plantid = '0202'

then run the query, you will receive the same results, but with much faster performance. [6.5, 7.0, 2000] Added 5-22-2002

*****

If you need to verify the existence of a record in a table, don't use SELECT COUNT(*) in your Transact-SQL code to identify it, which is very inefficient and wastes server resources. Instead, use the Transact-SQL IF EXISTS to determine if the record in question exists, which is much more efficient. For example:

Here's how you might use COUNT(*):

IF (SELECT COUNT(*) FROM table_name WHERE column_name = 'xxx')

Here's a faster way, using IF EXISTS:

IF EXISTS (SELECT * FROM table_name WHERE column_name = 'xxx')

The reason IF EXISTS is faster than COUNT(*) is because the query can end immediately as soon as the test is proven true, while COUNT(*) must go through every matching record, whether there is only one or thousands, before the condition can be evaluated. [7.0, 2000] Updated 4-1-2003

*****

Let's say that you often need to INSERT the same value into a column. For example, perhaps you have to perform 100,000 INSERTs a day into a particular table, and that 90% of the time the data INSERTed into one of the columns of the table is the same value.

If this is the case, you can reduce network traffic (along with some SQL Server overhead) by creating this particular column with a default value of the most common value. This way, when you INSERT your data and the data is the default value, you don't INSERT any data into this column, instead allowing the default value to automatically be filled in for you. But when the value needs to be different, you will of course INSERT that value into the column. [6.5, 7.0, 2000] Added 7-2-2003
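A minimal sketch of this technique (the table, columns and default value are hypothetical):

-- 'open' is the value inserted 90% of the time, so make it the column default
CREATE TABLE customer_orders (
order_id int IDENTITY(1,1) PRIMARY KEY,
customer_id int NOT NULL,
order_status varchar(10) NOT NULL DEFAULT 'open'
)

-- Common case: omit the column and let the default fill it in
INSERT INTO customer_orders (customer_id) VALUES (42)

-- Exception: supply the value only when it differs from the default
INSERT INTO customer_orders (customer_id, order_status) VALUES (43, 'closed')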

*****

UPDATEs take extra resources for SQL Server to perform. When performing an UPDATE, try to follow as many of the following recommendations as you can in order to reduce the amount of resources required. The more of the following suggestions you can apply, the faster the UPDATE will perform.

· If you are UPDATing a column of a row that has a unique index, try to only update one row at a time.

· Try not to change the value of a column that is also the primary key.

· When updating VARCHAR columns, try to replace the contents with contents of the same length.

· Try to minimize the UPDATing of tables that have UPDATE triggers.

· Try to avoid UPDATing columns that will be replicated to other databases.

· Try to avoid UPDATing heavily indexed columns.

· Try to avoid UPDATing a column that has a reference in the WHERE clause to the column being updated.

*****

To get the most out of connection pooling in ADO.NET, keep the following in mind when developing your applications:

* Be sure that your connections use the same connection string each time. Connection pooling only works if the connection string is the same. If the connection string is different, then a new connection will be opened.
* Only open a connection when you need it, not before.
* Close your connection as soon as you are done using it.
* Don't leave a connection open if it is not being used.
* Be sure to drop any temporary objects before closing a connection.
* Be sure to close any user-defined transactions before closing a connection.
* Don't use application roles if you want to take advantage of connection pooling.

Stored procedure to find the nth highest value (for example, the nth maximum salary) in a given column of a given table:

CREATE PROC nth (@table_name sysname,
@column_name sysname,
@nth int)
AS
BEGIN
SET @table_name = RTRIM(@table_name)
SET @column_name = RTRIM(@column_name)
DECLARE @exec_str VARCHAR(1000)

--Validate the table name
IF (SELECT OBJECT_ID(@table_name,'U')) IS NULL
BEGIN
RAISERROR('Invalid table name',18,1)
RETURN -1
END

--Validate the column name
IF NOT EXISTS(SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table_name AND COLUMN_NAME = @column_name)
BEGIN
RAISERROR('Invalid column name',18,1)
RETURN -1
END

IF @nth <= 0
BEGIN
RAISERROR('nth highest number should be greater than zero',18,1)
RETURN -1
END

--Exclude the top (n - 1) values and take the maximum of what remains
SET @exec_str = 'SELECT MAX(' + @column_name + ') FROM ' + @table_name + ' WHERE ' + @column_name + ' NOT IN ( SELECT TOP ' + LTRIM(STR(@nth - 1)) + ' ' + @column_name + ' FROM ' + @table_name + ' ORDER BY ' + @column_name + ' DESC )'

EXEC (@exec_str)
END
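For example, assuming a table named employee with a salary column (the names are illustrative only), the third highest salary could be fetched like this:

EXEC nth 'employee', 'salary', 3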





Stored procedure to generate a simple or complex random password

This procedure generates random passwords using the RAND() function. It can be configured to generate a simple or a complex password, and you can also customize the length of the password generated. Complex passwords include upper and lower case letters, numbers and special characters. See the code to realize how useful the RAND() function is! When you choose to generate a simple password (the default behavior), special care is taken to generate meaningful, easy-to-remember passwords.



CREATE PROC random_password
(
@len int = 8, --Length of the password to be generated
@password_type char(7) = 'simple'
--Default is to generate a simple password with lowercase letters.
--Pass anything other than 'simple' to generate a complex password.
--A complex password includes numbers, special characters, upper case and lower case letters.
)
AS
/***************************************************************
To generate a simple password with a length of 8 characters:
EXEC random_password

To generate a simple password with 6 characters:
EXEC random_password 6

To generate a complex password with 8 characters:
EXEC random_password @password_type = 'complex'

To generate a complex password with 6 characters:
EXEC random_password 6, 'complex'
***************************************************************/
BEGIN
DECLARE @password varchar(25), @type tinyint, @bitmap char(6)
SET @password = ''
SET @bitmap = 'uaeioy'
--@bitmap contains the vowels (a, e, i, o, u and y), which are used to generate slightly readable, easier-to-remember simple passwords

WHILE @len > 0
BEGIN
IF @password_type = 'simple' --Generating a simple password
BEGIN
IF (@len % 2) = 0 --Appending a random vowel to @password
SET @password = @password + SUBSTRING(@bitmap, CONVERT(int, ROUND(1 + (RAND() * 5), 0)), 1)
ELSE --Appending a random lower case letter
SET @password = @password + CHAR(ROUND(97 + (RAND() * 25), 0))
END
ELSE --Generating a complex password
BEGIN
SET @type = ROUND(1 + (RAND() * 3), 0)

IF @type = 1 --Appending a random lower case letter to @password
SET @password = @password + CHAR(ROUND(97 + (RAND() * 25), 0))
ELSE IF @type = 2 --Appending a random upper case letter to @password
SET @password = @password + CHAR(ROUND(65 + (RAND() * 25), 0))
ELSE IF @type = 3 --Appending a random digit between 0 and 9 to @password
SET @password = @password + CHAR(ROUND(48 + (RAND() * 9), 0))
ELSE IF @type = 4 --Appending a random special character to @password
SET @password = @password + CHAR(ROUND(33 + (RAND() * 13), 0))
END

SET @len = @len - 1
END

SELECT @password --Here's the result
END

SQL Server administration best practices
This article explains best practices for system administration in a Microsoft SQL Server 7.0 / 2000 environment, including regular maintenance tasks.

No industry today can do without a working and efficient data protection plan. Data being the lifeblood of any enterprise, protecting it becomes an inevitable task. All it takes for corporate data to be safe and secure is a sound and wise investment in a backup and restore strategy and its implementation. If an organization considers data important, then it must focus on data protection and be willing to bear the costs associated with it. The elements of cost for such a strategy include:

* Time invested in Planning
* Trained personnel
* Backup and restore hardware/media
* Backup and restore software
* Scheduled testing and validation of recovery plans

SQL Server utilizes a structure called a backup device to manage backups. These are logical names that point to physical files on the local hard disk or a network share. The backup devices allowed by SQL Server are tape, disk, and pipe (Note: Backups can also be written to and restored from physical files directly, without creating backup devices).
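As a minimal sketch (the database name and file paths are illustrative), a disk backup device can be created with sp_addumpdevice and then used in a BACKUP statement; a backup can also target a file directly:

-- Create a logical backup device that points to a physical disk file
EXEC sp_addumpdevice 'disk', 'Northwind_bak_dev', 'D:\SQLBackups\Northwind.bak'

-- Back up the database to that device
BACKUP DATABASE Northwind TO Northwind_bak_dev

-- Or back up directly to a file, without creating a device
BACKUP DATABASE Northwind TO DISK = 'D:\SQLBackups\Northwind_direct.bak'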

After all the introduction of why and how of backups, let's get to the core basics. We all know that backups are a must and they are the crux of an enterprise that cares about their data. Let me quote from an article I once read:

"If a DBA maintains proper backups and can guarantee recovery of data up to the point required by the business process, they have done the job they were hired for. A solid backup plan is the first thing a DBA is required to do. If a DBA does absolutely nothing else in your company, he/she has earned their money by providing a solid backup plan and protecting your data. Every other activity is a simple bonus on top of this."

The planning and implementation of backup and recovery plans, the steps involved and guidelines are discussed under the following sections:

* Creating a solid backup plan
* Determine where to store backups
* Things to keep in mind
* Determine when to backup databases
* Restoring databases
* Creating a solid disaster recovery plan
* Tips to save your life as a DBA

Creating a solid backup plan
Nothing strikes more fear into the hearts of managers, users and administrators than a server crashing and data becoming corrupted. A few short years ago, this would have been relatively less serious. But nowadays, guarding against it is absolutely vital, and a data loss is one of the most serious things that can happen to a company. This is because the data contained in the databases represents the competitive advantage of a company and its entire lifeblood. Losing key data can be catastrophic to a company.

Therefore, it is absolutely imperative that the DBA constructs a solid and reliable backup strategy so that in the event of a disaster the data can be recovered. The main questions that need to be answered while coming up with a backup strategy are:

1. How frequently does the data change in our system?

2. What is the downtime allowed for the production server in case of failure?

3. For determining the maintenance window, what is the time of the day, week and month when the database server is likely to have minimum activity - i.e. minimum updates?

4. How much is the enterprise willing to invest in data backup strategies? This is directly answered by addressing the question - how crucial is the data? Is it mission critical?

5. How much data loss is acceptable to the company?

6. How do we plan to recover the data in case of a failure?


Determine where to store backups
As we learned before, SQL Server can back up to hard disk, tape or named pipe devices. The question above pertains to making a decision as to what media a company wants to use for an efficient backup and restore strategy. While weighing that question, consider the following facts:

Disk File Backups: Disk backups store the data on a physical disk, i.e. simply a hard drive or a disk array. Disk backups can be performed locally or over the network. They are the most common and easiest medium for storing backups. Once a database has been backed up to a disk file, that file can in turn be backed up to tape as a part of the regular enterprise file system backup. Thus, in case of corrupt or lost data, the disk files must first be restored from tape, and the database can then be restored into SQL Server from the disk files.

Tape Backup: When you back up databases directly to a tape, the tape drive must be attached locally to the SQL Server. Backing up to remote tape devices is not supported. However, you can have multiple tape devices attached to a single SQL Server. These tapes can then be moved to another location for off-site storage.

Named Pipe backups: SQL Server provides the ability to backup data to a named pipe to allow users to take advantage of the backup and restore features of third-party software packages.

Things to keep in mind
A fact to always remember is that a database can be backed up while it is online and active. That means it can be actively utilized by clients while it is being backed up, so the database server doesn't have to be down while performing backups. However, it should be in a state where it is minimally utilized at that time. This is because the following operations cannot take place during the backup process:

Creating or deleting database files.

Creating indexes.

Performing non-logged operations.

Shrinking a database.

If you attempt to start a backup operation when one of these operations is in progress, the backup operation aborts. If a backup operation is already in progress and one of these operations is attempted, the operation fails and the backup operation continues.

While online backups are supported, you can't do online restores. During a restore operation, the database must not be in use.

Determine when to backup databases
Your decision as to when and how often you back up your database depends on your particular business environment and the degree of importance of the application. There are also times when you may need to perform unscheduled backups.

Backing up system databases:
The system databases need to be backed up just as user databases are backed up. This allows the system to be rebuilt in the event of system or database failure, for example, if a hard disk fails. Please note that it is always a good practice to create a separate maintenance plan for backing up the system databases and not mix it with the user databases. Within that too, back up master separately from the other system databases. This is because only full backups of the master database are allowed.

The system databases in SQL Server are: master, msdb, model, tempdb, distribution. It is important to have regular backups of these system databases, however, it is not necessary to back up the tempdb system database because it is rebuilt each time SQL Server is started. When SQL Server is shut down, any data in tempdb is deleted permanently. For this reason, do not store any application specific data in the tempdb database. Leave it exclusively for use by SQL Server.

Again, the model database needs to be backed up only if it is customized. Similarly, the distribution database comes into the picture only if the server is configured as a replication Distributor.

Master: The master database contains system information and high-level information about all databases on a SQL Server. If the master database becomes damaged, SQL Server may fail to start and user databases may become unavailable. There are many operations which change the content of the master database - like creating and altering databases, adding and modifying logins, creating linked servers, etc. But since one cannot keep backing up master after every such operation, schedule the master database to be backed up on a regular basis (for example, once every night or once every week, depending on the frequency of such changes). This will back up the changes made to the user databases and SQL Server, which can then be recovered in case of a master database corruption.

Note: Only full database backups of master can be performed. Transaction log, differential, and filegroup backups of master are not allowed. Thus, if you create a Database Maintenance Plan for all the system databases, or if you select the master database and you select the 'Back up the transaction log as part of the maintenance plan' option, the backup transaction log step for the master database will fail with this error message:

Backup can not be performed on this database. This sub task is ignored.

Model: The model database is a template, used by Microsoft SQL Server when creating other databases, such as tempdb or user databases. When a new database is created, the entire contents of the model database are copied to the new database. Back up the model database if you modify it, to include the default configuration for all new user databases. If the master or msdb databases are rebuilt, the model database is also rebuilt and therefore changes are lost.

Msdb: The msdb database is used by SQL Server, SQL Server Enterprise Manager, and SQL Server Agent to store data, including scheduled job information, backup and restore history information, DTS packages.

Note: You will notice that, by default, the trunc. log on chkpt database option is set to true, for the msdb database. This helps ensure that the transaction log of the database does not fill up, and prevents problems that may occur due to inadequate disk space. Because the msdb database generally remains rather small, full database backups provide a fast alternative to transaction log backups for this database.

Distribution: The distribution database is used by the replication components of SQL Server, to store data including transactions, snapshot jobs, synchronization status, and replication history information. A server configured to participate either as a remote distribution server or as a combined Publisher/Distributor has a distribution database.

Consult SQL Server Books Online regarding the backup/restore strategies of distribution database in different kinds of replication scenarios.

Backing up user databases:
User databases should be backed up on a regular basis. Also, it needs to be performed after a new database or index is created and when certain non-logged operations are executed.

There are four overall backup and restore strategies, each with its own strengths and weaknesses. A DBA needs to weigh each aspect of the database system and reach a decision, which is the best possible one for the system, users and administrators of the application. The database size and frequency of data modification determine the time and resources involved in implementing a database backup strategy. The four types of backups supported by SQL Server are:

· Backing up only the database

· Backing up the database and the transaction logs

· Differential database backups

· File or Filegroup Backups

Backing up only the database: With this strategy, the entire database is backed up regularly. In case of failure, all the committed transactions that occurred after the most recent database backup, are lost. The primary advantage of using only complete database backups is simplicity. Backing up is a single operation, normally scheduled at regular intervals. And should a restore be necessary, it can be accomplished easily in one step.

Use full database backups if:

· The database is small. The amount of time required to backup a small database is reasonable.

· The database has few data modifications.

· You are willing to accept the loss of changed data if the database fails between backups and must be restored to its previous state.

Backing up the database and the transaction logs: With this strategy, the entire database is backed up less frequently; the transaction log is backed up frequently between database backups. In case of failure, you will be able to recover all backed-up transactions, and possibly even committed (complete) transactions that occurred since the last transaction log backup, if the tail of the log can still be backed up after the failure. Only uncommitted (incomplete) transactions will be lost.

Use database and transaction log backups if:

· The database is considerably large or predicted to grow large in the near future.

· There are substantial updates/data modifications taking place on the database.

· In case of a disaster, the need is to recover the database to as recent a state as possible - thus not to lose any transactions taken place on it.

· You cannot afford to lose changes since the most recent database backup.

Differential database backups: This strategy is used to augment either the database backup strategy or a database and transaction log backup strategy. Differential backups consist only of the portions of the database that have changed since the last full database backup. The first stage in this strategy is always to take a complete database backup. Then you can schedule the transaction log backups as usual. The interesting part is that, from now on, instead of scheduling a complete database backup every time, you schedule a differential backup after the day's transaction log backups (a sketch of such a schedule follows the list below). The differential backup strategy, combined with the transaction log backup strategy, reduces the number of transaction log backups that need to be restored while rebuilding/recreating a database.

Use differential backup strategy if:

· The amount of time spent in recovering the database by applying all the transaction logs is not acceptable.
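As a sketch of how such a schedule might look when scripted by hand (the database name, file paths and timings are illustrative; in practice these would normally be SQL Server Agent jobs or a maintenance plan):

-- Sunday night: complete database backup
BACKUP DATABASE Sales TO DISK = 'D:\SQLBackups\Sales_full.bak' WITH INIT

-- Every other night: differential backup (everything changed since the last full backup)
BACKUP DATABASE Sales TO DISK = 'D:\SQLBackups\Sales_diff.bak' WITH DIFFERENTIAL, INIT

-- Every hour during the day: transaction log backup
BACKUP LOG Sales TO DISK = 'D:\SQLBackups\Sales_log.bak' WITH NOINIT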

File or filegroup backups: File or filegroup backups are a specialized form of database backup in which only certain individual files or filegroups from a database are backed up. This is usually done when there is not enough time to perform a full database backup. To make use of file and filegroup backups, transaction log backups must be created as well.

Use file or filegroup backup strategy if:

· The databases to be backed up are Very Large Databases (VLDB) which are partitioned among multiple files.

· For example, if you have only one hour to perform a database backup that would normally take four hours, you could create the database using four data files, backup only one file each night, and still ensure data consistency. Transaction log backups could be performed at short intervals during the day.

Restoring databases
The model, msdb, or distribution database may need to be restored from a backup when:

* The master database has been rebuilt using the Rebuild master command prompt utility.
* The model, msdb, or distribution database has been damaged, for example, due to media failure.

If model has been modified, it is necessary to restore model from a backup when you rebuild master because the Rebuild Master utility deletes and re-creates model database.

If msdb contains scheduling or other data used by the system, it is necessary to restore msdb from a backup when you rebuild master because the utility deletes and re-creates msdb, which results in a loss of all scheduling information, alerts, DTS packages etc. If msdb is not restored, and is not accessible, SQL Server Agent cannot access or initiate any previously scheduled tasks. For example, if database backup operations are scheduled to run automatically using SQL Server Agent, a damaged msdb will prevent those backup operations from occurring.

The distribution database is not rebuilt automatically when the Rebuild Master utility is used to rebuild master; therefore it is not necessary to restore distribution after rebuilding master. If the distribution database is still intact, distribution can be re-created automatically by attaching the database to Microsoft SQL Server. Alternatively, a backup of distribution can be restored instead.

However, if distribution is not re-created by restoring a backup or attaching the database, the SQL Server replication utilities will not run, preventing data replication. If the distribution database is used for replication by many Publishers, this can affect many systems.

You cannot restore a database that is being accessed by users. Therefore, when restoring msdb or distribution databases, SQL Server Agent should be stopped. If SQL Server Agent is running, it may access msdb or distribution databases.

Creating a solid disaster recovery plan (DRP)
Disaster recovery is the process by which information systems are recovered in the event of a catastrophe: a natural disaster such as a fire, or technical disaster such as a two-disk failure in a RAID-5 array. Disaster recovery planning is the work devoted to preparing all the actions that will occur in response to a catastrophic event. Disaster recovery assessment is the simulation of a catastrophic event and/or the evaluation of the disaster recovery plan's capability to deliver the specified recovery needs.

Questions that need to be addressed while creating a Disaster Recovery Plan:

1. Are you certain you can recover, in case of a catastrophe wiping out your 24/7 data center?

2. How long will it take you to recover and have your system available for normal functionality?

3. How much data loss can your organization tolerate?

4. How much are we ready to spend in order to achieve the required level of recovery?

Ideally, the disaster recovery plan should state how long the recovery should take, and the final database state the users can expect. It is typically important that management be kept clearly informed of these specifications. Disaster recovery assessment should be able to substantiate the specification.

A disaster recovery plan can be structured in many different ways and can contain many types of information (how and where to get the required hardware, the configuration of the servers, service pack information, who is to communicate what, who are the people to be contacted in the event of a disaster, how are they to be contacted, who owns the administration of the plan, and so on).


The Disaster Recovery Plan for each of the backup scenarios presented above is given below:

DRP for Backing up only the Database Strategy: If the backup strategy is to make complete backups of the databases, then recovery will be performed up to the point when the last full backup was taken.
To recover the database in case of a disaster, simply rebuild the server, restore the last complete backup taken, overwriting the corrupted version of the database.

DRP for Backing up the database and the transaction logs Strategy: Restoring a database that has been backed up using a database and transaction log strategy involves two steps. First rebuild the server, restore the most recent complete database backup. Then apply all of the transaction log backups that were created since the most recent complete database backup.

DRP for The Differential Backup Strategy: Recovery using this strategy requires that you restore the most recent complete database backup and the most recent differential backup. If transaction log backups are also made, only those created since the latest differential backup need to be applied to fully recover the database.

DRP for The Filegroup Backup Strategy: Recovering using this strategy requires you to first rebuild the server, restore all file and filegroup backups, followed by the restoration of all the transaction log backups taken between the earliest file or filegroup backup and the end of the latest file or filegroup backup.
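As a sketch of the restore sequence for the database-plus-transaction-log strategy (the database name and file paths are illustrative):

-- 1. Restore the most recent complete database backup, leaving the database non-operational
RESTORE DATABASE Sales FROM DISK = 'D:\SQLBackups\Sales_full.bak' WITH NORECOVERY

-- 2. Apply each transaction log backup in sequence
RESTORE LOG Sales FROM DISK = 'D:\SQLBackups\Sales_log1.trn' WITH NORECOVERY
RESTORE LOG Sales FROM DISK = 'D:\SQLBackups\Sales_log2.trn' WITH NORECOVERY

-- 3. Recover the database while applying the last log backup
RESTORE LOG Sales FROM DISK = 'D:\SQLBackups\Sales_log3.trn' WITH RECOVERY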

Tips to save your life as a DBA
Here are a few checklists of activities that an efficient DBA should perform on a regular basis to make life easier and to ensure the reliability of data:

Main Checklist for things to do during the initial setup:

1. Automate all possible jobs and maintenance plans on the server for things like database backups, integrity checks, automatic shrinking, transaction log backups, etc. You could do this by creating Maintenance Plans in SQL Server, which would automatically generate and schedule the required jobs. (For more details on Maintenance Plans, refer to Appendix A).

2. Install SQL Mail on all your production servers and set it up to send you notifications on your email (or cell phone or pager - whatever is convenient).

3. Through SQL Mail, set up email notifications for all the jobs and maintenance plans on the production server - for every database. Also set up email notifications for you to be notified in case of a severity alert - like file growing large and thus reducing disk space, etc.

4. Always keep a script of the functional database schema in a secure location on the network. This comes in handy if you need to know the structure of the database in production or you need to recover a database, which does not have any backup left.

5. Set the MSSQL and SQL Server Agent services to Auto-start when the server starts.

Monthly checklist:

1. Make a list of all the sa passwords for each server and save it in a secure place.

2. Make a list of all the passwords for each login created on the production boxes.

3. Save the SQL Servers’ and Windows' configuration information in a secure place. This information is needed to rebuild an NT & SQL Server box in case of a disaster.

4. Perform a test restore of a database backup. This is done in order to prepare for unforeseen situations.

5. Save information about any changes made to a server - hardware or software.

6. Maintain system logs in a secure fashion. Keep records of all service packs installed for both Microsoft Windows NT Server and Microsoft SQL Server. Keep records of network libraries used, the security mode, sa passwords and service accounts.

7. Assess the steps in recovering from a disaster ahead of time on another server, and amend the steps in your Disaster Recovery Document, as necessary to suit your environment.

8. Audit Database Access: You should periodically perform a review of who has access to your production databases and what type of rights they possess. Doing so can prevent unauthorized access to production data.

Daily Checklist:

1. Check the connectivity of each server over the network. You could do this by pinging the SQL servers twice a day or by clicking the server’s name in your Enterprise Manager and seeing if it is able to connect.

2. Check whether the services are running. For each server, go to its SQL Service Manager and check whether the SQL Server Agent and MSSQL Server services are running (showing a green light). If not, start those services. (You could also check these from the Control Panel or Enterprise Manager).

3. Check whether the scheduled tasks on the production servers are running normally. You could check this from the Enterprise Manager of each server or your email (if you have set up SQL Mail to notify you).

4. Check the hard disk space available on the SQL Servers. If the system drives run out of space, the server can crash.

5. Check all the database and transaction log space on each server. If the database or transaction log space runs out, the transactions will fail.

6. Check NT event Logs for any error messages. SQL Server writes to the NT application log in case of application errors or SQL errors and also warns you before a problem becomes critical.

7. Check SQL Error Logs for any errors occurring within SQL Server. SQL Server warns you through these logs before the problem becomes critical.
As needed Checklist:

1. Run disk defragmentation utilities: You should periodically run disk defragmentation utilities on your server's hard disks. A high degree of hard disk fragmentation can lead to decreased hard disk performance.

Other Useful Tips:

1. While backing up or restoring databases manually from Query Analyzer using the BACKUP or RESTORE commands, use the WITH STATS option. This option serves as a progress bar, continuously displaying the percentage of work done (see the sketch after this list).

2. Spread your backups across multiple backup devices residing on different hard disk drives. This lets SQL Server take advantage of parallel IO, and improves the backup and restore performance.

3. SQL Server 2000 lets you specify passwords for your backups. Use this feature effectively to prevent unauthorized access to backup files.

4. Consider implementing a combination of transaction log and differential database backups to reduce the time it takes to recover from a failure. This approach reduces the amount of transaction log that must be applied while restoring a database.
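A minimal sketch combining tips 1 through 3 above (the database name, file paths and password are illustrative; the PASSWORD option applies to SQL Server 2000):

BACKUP DATABASE Sales
TO DISK = 'D:\SQLBackups\Sales_1.bak', DISK = 'E:\SQLBackups\Sales_2.bak' -- two devices on different drives
WITH STATS = 10, -- report progress every 10 percent
PASSWORD = 'StrongBackupPwd',
INIT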

APPENDIX A

Creating Database Maintenance Plans

The Database Maintenance Plan Wizard can be used to set up the core maintenance tasks that are necessary to ensure that the database performs well, is regularly backed up in case of system failure, and is checked for inconsistencies. The Database Maintenance Plan Wizard creates SQL Server jobs that perform these maintenance tasks automatically at scheduled intervals.

The maintenance tasks that can be scheduled to run automatically are:

· Reorganizing the data on the data and index pages.

· Compressing data files by removing empty database pages.

· Updating index statistics to ensure the query optimizer has up-to-date information regarding the distribution of data values in the tables.

· Performing internal consistency checks of the data and data pages within the database to ensure that a system or software problem has not damaged data.

· Backing up the database and transaction log files. Database and log backups can be retained for a specified period. This allows you to create a history of backups to be used in the event that you need to restore the database to a time earlier than the last database backup.

The results generated by the maintenance tasks can be written as a report to a text file, HTML file, or the sysdbmaintplan_history tables in the msdb database. The report can also be e-mailed to an operator.

Useful Tips on Maintenance Plans

· It is always recommended to create a separate maintenance plan for the system databases and a separate one for the user databases. This way, when the backup policies for user databases are reviewed and changed depending on their usage, the system databases are not affected by the changes and their backups continue uninterrupted.

· Create an operator (with a valid MAPI account) in SQL Server, and make him/her receive all the reports sent by maintenance plans via email. This way he/she would know if a job fails or succeeds each time it is run.

· In the Integrity tab of the maintenance plan wizard, if you check ‘Attempt to repair any minor problems’, remember that this puts the database into single-user mode. If there are many users using it at that time, this operation will fail.

Transact-SQL Optimization Tips

Use views and stored procedures instead of heavy-duty queries.
This can reduce network traffic, because your client will send only the stored procedure or view name (perhaps with some parameters) to the server, instead of the text of a large, heavy-duty query. It can also be used to facilitate permission management, because you can restrict user access to the table columns they should not see.

Try to use constraints instead of triggers, whenever possible.
Constraints are much more efficient than triggers and can boost performance. So, you should use constraints instead of triggers, whenever possible.

Use table variables instead of temporary tables.
Table variables require less locking and logging resources than temporary tables, so table variables should be used whenever possible. Table variables are available in SQL Server 2000 only.

Try to use UNION ALL statement instead of UNION, whenever possible.
The UNION ALL statement is much faster than UNION, because the UNION ALL statement does not look for duplicate rows, while the UNION statement does, whether or not any exist.

Try to avoid using the DISTINCT clause, whenever possible.
Because using the DISTINCT clause will result in some performance degradation, you should use this clause only when it is necessary.

Try to avoid using SQL Server cursors, whenever possible.
SQL Server cursors can result in some performance degradation in comparison with set-based SELECT statements. Try to use a correlated sub-query or a derived table if you need to perform row-by-row operations.

Try to avoid the HAVING clause, whenever possible.
The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your SELECT statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.

* If you need to return the total row count of a table, you can use an alternative to the SELECT COUNT(*) statement.
Because the SELECT COUNT(*) statement makes a full table scan to return the total row count, it can take a very long time for a large table. There is another way to determine the total row count in a table: the sysindexes system table. The rows column in sysindexes contains the total row count for each table in your database. So, you can use the following statement instead of SELECT COUNT(*): SELECT rows FROM sysindexes WHERE id = OBJECT_ID('table_name') AND indid < 2. This can speed up such queries many times over.
* Include the SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement (see the sketch after this list).
This can reduce network traffic, because your client will not receive the message indicating the number of rows affected by a T-SQL statement.
* Try to restrict the query's result set by using the WHERE clause.
This can result in good performance benefits, because SQL Server will return only the required rows to the client, not all rows from the table(s). This can reduce network traffic and boost the overall performance of the query.
* Use SELECT statements with the TOP keyword or the SET ROWCOUNT statement if you need to return only the first n rows.
This can improve the performance of your queries, because a smaller result set will be returned. This can also reduce the traffic between the server and the clients.
* Try to restrict the query's result set by returning only the particular columns needed from the table, not all of the table's columns.
This can result in good performance benefits, because SQL Server will return only the particular columns to the client, not all of the table's columns. This can reduce network traffic and boost the overall performance of the query.
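A small stored procedure sketch that applies several of these tips together (SET NOCOUNT ON, an explicit column list, a restrictive WHERE clause, and TOP); the Northwind Orders table is used purely for illustration:

CREATE PROC get_recent_orders
@customer_id nchar(5)
AS
SET NOCOUNT ON -- suppress the 'rows affected' messages

SELECT TOP 10 OrderID, OrderDate, Freight -- only the columns the client needs
FROM Orders
WHERE CustomerID = @customer_id -- restrict the result set on the server
ORDER BY OrderDate DESC
GO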

Other common ways to improve query performance:
1. Indexes
2. Avoid having too many triggers on the table
3. Avoid unnecessarily complicated joins
4. Correct use of the GROUP BY clause with the select list
5. In worst cases, denormalization

Index Optimization tips

* Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs, so the number of indexes should not be excessive. Try to use a maximum of 4-5 indexes on one table, not more. If you have a read-only table, then the number of indexes may be increased.
* Keep your indexes as narrow as possible. This reduces the size of the index and reduces the number of reads required to read the index.
* Try to create indexes on columns that have integer values rather than character values.
* If you create a composite (multi-column) index, the order of the columns in the key is very important. Try to order the columns in the key so as to enhance selectivity, with the most selective columns leftmost in the key.
* If you want to join several tables, try to create surrogate integer keys for this purpose and create indexes on those columns.
* Create a surrogate integer primary key (an identity column, for example) if your table will not have many insert operations.
* Clustered indexes are preferable to nonclustered indexes if you need to select by a range of values or you need to sort the result set with GROUP BY or ORDER BY.
* If your application will be performing the same query over and over on the same table, consider creating a covering index on the table.
* You can use the SQL Server Profiler Create Trace Wizard with the "Identify Scans of Large Tables" trace to determine which tables in your database may need indexes. This trace will show which tables are being scanned by queries instead of using an index.
* You can use the sp_MSforeachtable undocumented stored procedure to rebuild all indexes in your database. Try to schedule it to execute during CPU idle time and slow production periods.
sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"

T-SQL Queries

1. Two tables:
Employee (empid, empname, salary, mgrid)
Phone (empid, phnumber)

2. Select all employees who don't have a phone.
SELECT empname FROM Employee WHERE (empid NOT IN (SELECT DISTINCT empid FROM phone))
3. Select the names of employees who have more than one phone number.
SELECT empname FROM employee WHERE (empid IN (SELECT empid FROM phone GROUP BY empid HAVING COUNT(empid) > 1))
4. Select the details of 3 max salaried employees from employee table.
SELECT TOP 3 empid, salary FROM employee ORDER BY salary DESC
5. Display all managers from the table. (manager id is same as emp id)
SELECT empname FROM employee WHERE (empid IN (SELECT DISTINCT mgrid FROM employee))
6. Write a Select statement to list the Employee Name, Manager Name under a particular manager?
SELECT e1.empname AS EmpName, e2.empname AS ManagerName
FROM Employee e1 INNER JOIN
Employee e2 ON e1.mgrid = e2.empid
ORDER BY e2.mgrid
7. Two tables, emp and phone.
emp fields are: empid, name
phone fields are: empid, office, mobile, home. Select all employees who don't have any phone numbers.
SELECT *
FROM employee LEFT OUTER JOIN
phone ON employee.empid = phone.empid
WHERE (phone.office IS NULL OR phone.office = ' ')
AND (phone.mobile IS NULL OR phone.mobile = ' ')
AND (phone.home IS NULL OR phone.home = ' ')
8. Find employees who are living in more than one city.
Two tables:
Emp (empid, empname, salary)
City (empid, city)

9. SELECT empname, fname, lname
FROM employee
WHERE (empid IN
(SELECT empid
FROM city
GROUP BY empid
HAVING COUNT(empid) > 1))
10. Find all employees who are living in the same city. (The tables are the same as above.)
SELECT fname
FROM employee
WHERE (empid IN
(SELECT empid
FROM city a
WHERE city IN
(SELECT city
FROM city b
GROUP BY city
HAVING COUNT(city) > 1)))
11. There is a table named MovieTable with three columns - moviename, person and role. Write a query which gets the movie details where Mr. Amitabh and Mr. Vinod acted and their role is actor.
SELECT DISTINCT m1.moviename
FROM MovieTable m1 INNER JOIN
MovieTable m2 ON m1.moviename = m2.moviename
WHERE (m1.person = 'amitabh' AND m2.person = 'vinod' OR
m2.person = 'amitabh' AND m1.person = 'vinod') AND (m1.role = 'actor') AND (m2.role = 'actor')
ORDER BY m1.moviename
12. There are two employee tables named emp1 and emp2. Both contain the same structure (salary details), but emp2's salary details are incorrect and emp1's salary details are correct. So, write a query which corrects the salary details in the table emp2.
UPDATE b SET b.sal = a.sal FROM emp1 a, emp2 b WHERE a.empid = b.empid
13. Given a table named “Students” which contains studentid, subjectid and marks, where there are 10 subjects and 50 students, write a query to find out the maximum marks obtained in each subject.
14. In the same table, now write a SQL query to also get the studentid, to combine with the previous results.
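One possible answer sketch for 13 and 14, assuming only the three columns named in the question:

-- 13. Maximum marks obtained in each subject
SELECT subjectid, MAX(marks) AS maxmarks
FROM Students
GROUP BY subjectid

-- 14. Also return the studentid(s) who obtained those maximum marks
SELECT s.subjectid, s.studentid, s.marks
FROM Students s
WHERE s.marks = (SELECT MAX(marks) FROM Students m WHERE m.subjectid = s.subjectid)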
15. Three tables – student, course, marks – how do you go about finding the names of the students who got the maximum marks in the different courses?
SELECT student.name, course.name AS coursename, marks.sid, marks.mark
FROM marks INNER JOIN
student ON marks.sid = student.sid INNER JOIN
course ON marks.cid = course.cid
WHERE (marks.mark =
(SELECT MAX(Mark)
FROM Marks MaxMark
WHERE MaxMark.cID = Marks.cID))
16. There is a table day_temp which has three columns: dayid, day and temperature. How do I write a query to get the difference in temperature between consecutive days for the seven days of a week?
SELECT a.dayid, a.dday, a.tempe, a.tempe - b.tempe AS Difference
FROM day_temp a INNER JOIN
day_temp b ON a.dayid = b.dayid + 1
OR
Select a.day, a.degree-b.degree from temperature a, temperature b where a.id=b.id+1
17. There is a table which contains names like a1, a2, a3, a3, a4, a1, a1, a2, along with their salaries. Write a query to get the grand total salary, and the total salaries of individual employees, in one query.
SELECT empid, SUM(salary) AS salary
FROM employee
GROUP BY empid WITH ROLLUP
ORDER BY empid
18. How to know how many tables contains empno as a column in a database?
SELECT COUNT(*) AS Counter
FROM syscolumns
WHERE (name = 'empno')
19. Find duplicate rows in a table. OR: I have a table with one column which has many records which are not distinct. I need to find the distinct values from that column and the number of times each is repeated.
SELECT sid, mark, COUNT(*) AS Counter
FROM marks
GROUP BY sid, mark
HAVING (COUNT(*) > 1)
20. How to delete the rows which are duplicate (don’t delete both duplicate records).
SET ROWCOUNT 1
DELETE yourtable
FROM yourtable a
WHERE (SELECT COUNT(*) FROM yourtable b WHERE b.name1 = a.name1 AND b.age1 = a.age1) > 1
WHILE @@rowcount > 0
DELETE yourtable
FROM yourtable a
WHERE (SELECT COUNT(*) FROM yourtable b WHERE b.name1 = a.name1 AND b.age1 = a.age1) > 1
SET ROWCOUNT 0
21. How to find 6th highest salary
SELECT TOP 1 salary
FROM (SELECT DISTINCT TOP 6 salary
FROM employee
ORDER BY salary DESC) a
ORDER BY salary
22. Find top salary among two tables
SELECT TOP 1 sal
FROM (SELECT MAX(sal) AS sal
FROM sal1
UNION
SELECT MAX(sal) AS sal
FROM sal2) a
ORDER BY sal DESC
23. Write a query to convert all the letters in a word to upper case
SELECT UPPER('test')
24. Write a query to round up the values of a number. For example even if the user enters 7.1 it should be rounded up to 8.
SELECT CEILING (7.1)
25. Write a SQL Query to find first day of month?
SELECT DATENAME(dw, DATEADD(dd, - DATEPART(dd, GETDATE()) + 1, GETDATE())) AS FirstDay

Datepart - Abbreviations
Year - yy, yyyy
Quarter - qq, q
Month - mm, m
Dayofyear - dy, y
Day - dd, d
Week - wk, ww
Weekday - dw
Hour - hh
Minute - mi, n
Second - ss, s
Millisecond - ms

26. Table A contains column1 which is primary key and has 2 values (1, 2) and Table B contains column1 which is primary key and has 2 values (2, 3). Write a query which returns the values that are not common for the tables and the query should return one column with 2 records.
SELECT tbla.a
FROM tbla, tblb
WHERE tbla.a <>
(SELECT tblb.a
FROM tbla, tblb
WHERE tbla.a = tblb.a)
UNION
SELECT tblb.a
FROM tbla, tblb
WHERE tblb.a <>
(SELECT tbla.a
FROM tbla, tblb
WHERE tbla.a = tblb.a)

OR (better approach)

SELECT a
FROM tbla
WHERE a NOT IN
(SELECT a
FROM tblb)
UNION ALL
SELECT a
FROM tblb
WHERE a NOT IN
(SELECT a
FROM tbla)
27. There are 3 tables Titles, Authors and Title-Authors (check PUBS db). Write the query to get the author name and the number of books written by that author, the result should start from the author who has written the maximum number of books and end with the author who has written the minimum number of books.
SELECT authors.au_lname, COUNT(*) AS BooksCount
FROM authors INNER JOIN
titleauthor ON authors.au_id = titleauthor.au_id INNER JOIN
titles ON titles.title_id = titleauthor.title_id
GROUP BY authors.au_lname
ORDER BY BooksCount DESC
28. Write a single UPDATE statement that gives a 1% raise to salaries up to 20000 and a 2% raise to salaries above 20000 (emp_master table).

UPDATE emp_master
SET emp_sal =
CASE
WHEN emp_sal > 0 AND emp_sal <= 20000 THEN (emp_sal * 1.01)
WHEN emp_sal > 20000 THEN (emp_sal * 1.02)
END
29. List all products with total quantity ordered, if quantity ordered is null show it as 0.
SELECT name, CASE WHEN SUM(qty) IS NULL THEN 0 WHEN SUM(qty) > 0 THEN SUM(qty) END AS tot
FROM [order] RIGHT OUTER JOIN
product ON [order].prodid = product.prodid
GROUP BY name
Result:
coke 60
mirinda 0
pepsi 10
30. ANY, SOME, or ALL?
ALL means greater than every value--in other words, greater than the maximum value. For example, >ALL (1, 2, 3) means greater than 3.
ANY means greater than at least one value, that is, greater than the minimum. So >ANY (1, 2, 3) means greater than 1. SOME is an SQL-92 standard equivalent for ANY.
31. What is the difference between IN and = in a correlated subquery?
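Briefly: with = the subquery must return exactly one value for each outer row (a multi-value result raises a run-time error), while IN accepts any number of values. A small sketch, reusing the student and marks tables from the earlier questions:
-- IN works even if the subquery returns many rows
SELECT name FROM student WHERE sid IN (SELECT sid FROM marks WHERE mark > 90)
-- = fails at run time if the correlated subquery returns more than one row per student
SELECT name FROM student s WHERE s.sid = (SELECT m.sid FROM marks m WHERE m.sid = s.sid AND m.mark > 90)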

INDEX
32. What is an index? What is its purpose?
Indexes in databases are similar to indexes in books. In a database, an index allows the database program to find data in a table without scanning the entire table. An index in a database is a list of values in a table with the storage locations of rows in the table that contain each value. Indexes can be created on either a single column or a combination of columns in a table and are implemented in the form of B-trees. An index contains an entry with one or more columns (the search key) from each row in a table. A B-tree is sorted on the search key, and can be searched efficiently on any leading subset of the search key. For example, an index on columns A, B, C can be searched efficiently on A, on A, B, and A, B, C.
33. Explain about Clustered and non clustered index? How to choose between a Clustered Index and a Non-Clustered Index?
There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored. Therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf nodes of a nonclustered index do not consist of the data pages. Instead, the leaf nodes contain index rows.
Consider using a clustered index for:
* Columns that contain a large number of distinct values.
* Queries that return a range of values using operators such as BETWEEN, >, >=, <, and <=.
* Columns that are accessed sequentially.
* Queries that return large result sets.
Non-clustered indexes have the same B-tree structure as clustered indexes, with two significant differences:
* The data rows are not sorted and stored in order based on their non-clustered keys.
* The leaf layer of a non-clustered index does not consist of the data pages. Instead, the leaf nodes contain index rows. Each index row contains the non-clustered key value and one or more row locators that point to the data row (or rows if the index is not unique) having the key value.
* A table can have a maximum of 249 nonclustered indexes.
34. Disadvantage of index?
Every index increases the time it takes to perform INSERTs, UPDATEs and DELETEs, so the number of indexes should be kept to a minimum.
35. Given a scenario that I have a 10 Clustered Index in a Table to all their 10 Columns. What are the advantages and disadvantages?
A: Only 1 clustered index is possible.
36. How can I force a query to use a particular index?
You can use index hint (index=) after the table name.
SELECT au_lname FROM authors (index=aunmind)
37. What is Index Tuning?
One of the hardest tasks facing database administrators is the selection of appropriate columns for non-clustered indexes. You should consider creating non-clustered indexes on any columns that are frequently referenced in the WHERE clauses of SQL statements. Other good candidates are columns referenced by JOIN and GROUP BY operations.
You may wish to also consider creating non-clustered indexes that cover all of the columns used by certain frequently issued queries. These queries are referred to as “covered queries” and experience excellent performance gains.
Index Tuning is the process of finding appropriate column for non-clustered indexes.
SQL Server provides a wonderful facility known as the Index Tuning Wizard which greatly enhances the index selection process.
38. Difference between Index defrag and Index rebuild?
When you create an index in the database, the index information used by queries is stored in index pages. The sequential index pages are chained together by pointers from one page to the next. When changes are made to the data that affect the index, the information in the index can become scattered in the database. Rebuilding an index reorganizes the storage of the index data (and table data in the case of a clustered index) to remove fragmentation. This can improve disk performance by reducing the number of page reads required to obtain the requested data
DBCC INDEXDEFRAG - Defragments the leaf level of clustered and secondary (nonclustered) indexes of the specified table or view. It is an online operation that defragments the index in place and does not hold long-term locks, whereas rebuilding an index (for example with DBCC DBREINDEX, or by dropping and re-creating it) rebuilds the index completely, as shown in the sketch below.
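A minimal sketch of the two operations in SQL Server 2000, using the pubs table authors and its index aunmind from the earlier example (the fill factor value is just illustrative):
-- Online defragmentation of the index leaf level; long-term locks are not held
DBCC INDEXDEFRAG (pubs, authors, aunmind)
-- Complete rebuild of the index
DBCC DBREINDEX ('pubs..authors', aunmind, 90)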
39. What is sorting and what is the difference between sorting & clustered indexes?
The ORDER BY clause sorts query results by one or more columns, up to 8,060 bytes; this sorting happens at the time the data is retrieved from the database. A clustered index physically sorts the data as rows are inserted or updated in the table.
40. What are statistics, under what circumstances they go out of date, how do you update them?
Statistics determine the selectivity of the indexes. If an indexed column has unique values then the selectivity of that index is higher, as opposed to an index with non-unique values. The query optimizer uses these statistics in determining whether or not to choose an index while executing a query.
Some situations under which you should update statistics:
1) If there is significant change in the key values in the index
2) If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated
3) Database is upgraded from a previous version
41. What is fill factor? What is its use? What happens when we ignore it? When should you use a low fill factor?
When you create a clustered index, the data in the table is stored in the data pages of the database according to the order of the values in the indexed columns. When new rows of data are inserted into the table or the values in the indexed columns are changed, Microsoft® SQL Server™ 2000 may have to reorganize the storage of the data in the table to make room for the new row and maintain the ordered storage of the data. This also applies to nonclustered indexes. When data is added or changed, SQL Server may have to reorganize the storage of the data in the nonclustered index pages. When a new row is added to a full index page, SQL Server moves approximately half the rows to a new page to make room for the new row. This reorganization is known as a page split. Page splitting can impair performance and fragment the storage of the data in a table.
When creating an index, you can specify a fill factor to leave extra gaps and reserve a percentage of free space on each leaf level page of the index to accommodate future expansion in the storage of the table's data and reduce the potential for page splits. The fill factor value is a percentage from 0 to 100 that specifies how much to fill the data pages after the index is created. A value of 100 means the pages will be full and will take the least amount of storage space. This setting should be used only when there will be no changes to the data, for example, on a read-only table. A lower value leaves more empty space on the data pages, which reduces the need to split data pages as indexes grow but requires more storage space. This setting is more appropriate when there will be changes to the data in the table.
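For example, a fill factor can be specified when the index is created (a sketch; the table, column and value are illustrative):
CREATE NONCLUSTERED INDEX IX_employee_name
ON employee (name)
WITH FILLFACTOR = 80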

DATA TYPES
42. What are the data types in SQL Server?

bigint, binary, bit, char, cursor, datetime, decimal, float, image, int, money, nchar, ntext, nvarchar, real, smalldatetime, smallint, smallmoney, text, timestamp, tinyint, varbinary, varchar, uniqueidentifier


43. Difference between char and nvarchar / char and varchar data-type?
char[(n)] - Fixed-length non-Unicode character data with length of n bytes. n must be a value from 1 through 8,000. Storage size is n bytes. The SQL-92 synonym for char is character.
nvarchar(n) - Variable-length Unicode character data of n characters. n must be a value from 1 through 4,000. Storage size, in bytes, is two times the number of characters entered. The data entered can be 0 characters in length. The SQL-92 synonyms for nvarchar are national char varying and national character varying.
varchar[(n)] - Variable-length non-Unicode character data with length of n bytes. n must be a value from 1 through 8,000. Storage size is the actual length in bytes of the data entered, not n bytes. The data entered can be 0 characters in length. The SQL-92 synonyms for varchar are char varying or character varying.
44. GUID datasize?
128 bits (16 bytes).
45. How GUID becoming unique across machines?
To ensure uniqueness across machines, the ID of the network card is used (among others) to compute the number.
46. What is the difference between text and image data type?
Text and image. Use text for character data if you need to store more than 255 characters in SQL Server 6.5, or more than 8000 in SQL Server 7.0. Use image for binary large objects (BLOBs) such as digital images. With text and image data types, the data is not stored in the row, so the limit of the page size does not apply. All that is stored in the row is a pointer to the database pages that contain the data. Individual text, ntext, and image values can be a maximum of 2 GB, which is too long to store in a single data row.

1. JOINS
2. What are joins?
Sometimes we have to select data from two or more tables to make our result complete. We have to perform a join.
3. How many types of Joins?
Joins can be categorized as:
* Inner joins (the typical join operation, which uses some comparison operator like = or <>). These include equi-joins and natural joins.
Inner joins use a comparison operator to match rows from two tables based on the values in common columns from each table. For example, retrieving all rows where the student identification number is the same in both the students and courses tables.
* Outer joins. Outer joins can be a left, a right, or full outer join.
Outer joins are specified with one of the following sets of keywords when they are specified in the FROM clause:
o LEFT JOIN or LEFT OUTER JOIN -The result set of a left outer join includes all the rows from the left table specified in the LEFT OUTER clause, not just the ones in which the joined columns match. When a row in the left table has no matching rows in the right table, the associated result set row contains null values for all select list columns coming from the right table.
o RIGHT JOIN or RIGHT OUTER JOIN - A right outer join is the reverse of a left outer join. All rows from the right table are returned. Null values are returned for the left table any time a right table row has no matching row in the left table.
o FULL JOIN or FULL OUTER JOIN - A full outer join returns all rows in both the left and right tables. Any time a row has no match in the other table, the select list columns from the other table contain null values. When there is a match between the tables, the entire result set row contains data values from the base tables.
* Cross joins - Cross joins return all rows from the left table, each row from the left table is combined with all rows from the right table. Cross joins are also called Cartesian products. (A Cartesian join will get you a Cartesian product. A Cartesian join is when you join every row of one table to every row of another table. You can also get one by joining every row of a table to every row of itself.)
4. What is self join?
A table can be joined to itself in a self-join.
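For example, listing each employee along with his or her manager from a single employee table (a sketch assuming columns empid, name and managerid):
SELECT e.name AS Employee, m.name AS Manager
FROM employee e INNER JOIN
employee m ON e.managerid = m.empid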
5. What are the differences between UNION and JOINS?
A join selects columns from 2 or more tables. A union selects rows.
6. Can I improve performance by using the ANSI-style joins instead of the old-style joins?
Code Example 1:
select o.name, i.name
from sysobjects o, sysindexes i
where o.id = i.id
Code Example 2:
select o.name, i.name
from sysobjects o inner join sysindexes i
on o.id = i.id
You will not get any performance gain by switching to the ANSI-style JOIN syntax.
Using the ANSI-JOIN syntax gives you an important advantage: Because the join logic is cleanly separated from the filtering criteria, you can understand the query logic more quickly.
The SQL Server old-style JOIN executes the filtering conditions before executing the joins, whereas the ANSI-style JOIN reverses this procedure (join logic precedes filtering).
Perhaps the most compelling argument for switching to the ANSI-style JOIN is that Microsoft has explicitly stated that SQL Server will not support the old-style OUTER JOIN syntax indefinitely. Another important consideration is that the ANSI-style JOIN supports query constructions that the old-style JOIN syntax does not support.
7. What is derived table?
Derived tables are SELECT statements in the FROM clause referred to by an alias or a user-specified name. The result set of the SELECT in the FROM clause forms a table used by the outer SELECT statement. For example, this SELECT uses a derived table to find if any store carries all book titles in the pubs database:
SELECT ST.stor_id, ST.stor_name
FROM stores AS ST,
(SELECT stor_id, COUNT(DISTINCT title_id) AS title_count
FROM sales
GROUP BY stor_id
) AS SA
WHERE ST.stor_id = SA.stor_id
AND SA.title_count = (SELECT COUNT(*) FROM titles)

STORED PROCEDURE
8. What is Stored procedure?
A stored procedure is a set of Structured Query Language (SQL) statements that you assign a name to and store in a database in compiled form so that you can share it between a number of programs.
* They allow modular programming.
* They allow faster execution.
* They can reduce network traffic.
* They can be used as a security mechanism.
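A minimal example of creating and executing a stored procedure (the procedure name is illustrative; the marks table is the one used elsewhere in this document):
CREATE PROCEDURE GetMarksBySid
@sid int
AS
SELECT sid, cid, mark
FROM marks
WHERE sid = @sid
GO
EXEC GetMarksBySid @sid = 1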
9. What are the different types of Stored Procedures?

a. Temporary Stored Procedures - SQL Server supports two types of temporary procedures: local and global. A local temporary procedure is visible only to the connection that created it. A global temporary procedure is available to all connections. Local temporary procedures are automatically dropped at the end of the current session. Global temporary procedures are dropped at the end of the last session using the procedure. Usually, this is when the session that created the procedure ends. Temporary procedures named with # and ## can be created by any user.

b. System stored procedures are created and stored in the master database and have the sp_ prefix.(or xp_) System stored procedures can be executed from any database without having to qualify the stored procedure name fully using the database name master. (If any user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.)

c. Automatically Executing Stored Procedures - One or more stored procedures can execute automatically when SQL Server starts. The stored procedures must be created by the system administrator and executed under the sysadmin fixed server role as a background process. The procedure(s) cannot have any input parameters.

d. User stored procedure

10. How do I mark the stored procedure to automatic execution?
You can use the sp_procoption system stored procedure to mark the stored procedure to automatic execution when the SQL Server will start. Only objects in the master database owned by dbo can have the startup setting changed and this option is restricted to objects that have no parameters.
USE master
EXEC sp_procoption 'indRebuild', 'startup', 'true'
11. How can you optimize a stored procedure?
12. How will you know whether a SQL statement executed successfully?
When used in a stored procedure, the RETURN statement can specify an integer value to return to the calling application, batch, or procedure. If no value is specified on RETURN, a stored procedure returns the value 0. The stored procedures return a value of 0 when no errors were encountered. Any nonzero value indicates an error occurred.
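A small sketch of checking the return value, reusing the GetMarksBySid sketch shown earlier:
DECLARE @ret int
EXEC @ret = GetMarksBySid @sid = 1
IF @ret <> 0
PRINT 'The procedure reported an error'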
13. Why one should not prefix user stored procedures with sp_?
It is strongly recommended that you do not create any stored procedures using sp_ as a prefix. SQL Server always looks for a stored procedure beginning with sp_ in this order:

0. The stored procedure in the master database.

1. The stored procedure based on any qualifiers provided (database name or owner).

2. The stored procedure using dbo as the owner, if one is not specified.

Therefore, although the user-created stored procedure prefixed with sp_ may exist in the current database, the master database is always checked first, even if the stored procedure is qualified with the database name.

14. What can cause a Stored procedure execution plan to become invalidated and/or fall out of cache?

0. Server restart

1. Plan is aged out due to low use

2. DBCC FREEPROCCACHE (sometimes desired to force it)

15. When does one need to recompile a stored procedure?
If a new index is added from which the stored procedure might benefit, optimization does not automatically happen (until the next time the stored procedure is run after SQL Server is restarted).
16. SQL Server provides three ways to recompile a stored procedure:

* The sp_recompile system stored procedure forces a recompile of a stored procedure the next time it is run.
* Creating a stored procedure that specifies the WITH RECOMPILE option in its definition indicates that SQL Server does not cache a plan for this stored procedure; the stored procedure is recompiled each time it is executed. Use the WITH RECOMPILE option when stored procedures take parameters whose values differ widely between executions of the stored procedure, resulting in different execution plans to be created each time. Use of this option is uncommon, and causes the stored procedure to execute more slowly because the stored procedure must be recompiled each time it is executed.
* You can force the stored procedure to be recompiled by specifying the WITH RECOMPILE option when you execute the stored procedure. Use this option only if the parameter you are supplying is atypical or if the data has significantly changed since the stored procedure was created.
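Sketches of the three options (procedure names are illustrative):
-- 1. Mark the procedure for recompilation on its next execution
EXEC sp_recompile 'GetMarksBySid'
-- 2. Never cache a plan for this procedure
CREATE PROCEDURE GetMarksBySid2 @sid int
WITH RECOMPILE
AS
SELECT sid, cid, mark FROM marks WHERE sid = @sid
GO
-- 3. Recompile for this one execution only
EXEC GetMarksBySid @sid = 1 WITH RECOMPILE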
16. How to find out which stored procedure is recompiling? How to stop stored procedures from recompiling?
17. I have two stored procedures SP1 and SP2 as given below. How does the transaction work: does the SP2 transaction succeed or fail?
CREATE PROCEDURE SP1 AS
BEGIN TRAN
INSERT INTO MARKS (SID,MARK,CID) VALUES (5,6,3)
EXEC SP2
ROLLBACK
GO

CREATE PROCEDURE SP2 AS
BEGIN TRAN
INSERT INTO MARKS (SID,MARK,CID) VALUES (100,100,103)
commit tran
GO
Both inserts will be rolled back: the inner COMMIT only decrements @@TRANCOUNT, and the outer ROLLBACK rolls back the entire transaction.
18. CREATE PROCEDURE SP1 AS
BEGIN TRAN
INSERT INTO MARKS (SID,MARK,CID) VALUES (5,6,3)
BEGIN TRAN
INSERT INTO STUDENT (SID,NAME1) VALUES (1,'SA')
commit tran
ROLLBACK TRAN
GO
Both inserts will be rolled back, for the same reason: the inner COMMIT only decrements @@TRANCOUNT, and the ROLLBACK TRAN rolls back to the outermost BEGIN TRAN.
19. How will you handle Errors in Sql Stored Procedure?
INSERT NonFatal VALUES (@Column2)
IF @@ERROR <>0
BEGIN
PRINT 'Error Occurred'
END
http://www.sqlteam.com/item.asp?ItemID=2463
20. How will you raise an error in sql?
RAISERROR - Returns a user-defined error message and sets a system flag to record that an error has occurred. Using RAISERROR, the client can either retrieve an entry from the sysmessages table or build a message dynamically with user-specified severity and state information. After the message is defined it is sent back to the client as a server error message.
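For example, raising an ad hoc message with severity 16 and state 1:
RAISERROR ('Salary must be greater than zero.', 16, 1)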
21. I have a stored procedure like
commit tran
create table a()
insert into table b
--
--
rollback tran
what will be the result? Is table created? data will be inserted in table b?
22. What do you do when one procedure is blocking the other?
**
23. How you will return XML from Stored Procedure?
You use the FOR XML clause of the SELECT statement, and within the FOR XML clause you specify an XML mode: RAW, AUTO, or EXPLICIT.
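A minimal sketch using AUTO mode inside a stored procedure (the procedure name is illustrative; authors is from pubs):
CREATE PROCEDURE GetAuthorsXml
AS
SELECT au_id, au_lname, au_fname
FROM authors
FOR XML AUTO
GO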
24. What are the differences between RAW, AUTO and Explicit modes in retrieving data from SQL Server in XML format?
**
25. Can a Stored Procedure call itself (recursive). If so then up to what level and can it be control?
Stored procedures are nested when one stored procedure calls another. You can nest stored procedures up to 32 levels. The nesting level increases by one when the called stored procedure begins execution and decreases by one when the called stored procedure completes execution. Attempting to exceed the maximum of 32 levels of nesting causes the whole calling stored procedure chain to fail. The current nesting level for the stored procedures in execution is stored in the @@NESTLEVEL function.
eg:
SET NOCOUNT ON
USE master
IF OBJECT_ID('dbo.sp_calcfactorial') IS NOT NULL
DROP PROC dbo.sp_calcfactorial
GO
CREATE PROC dbo.sp_calcfactorial
@base_number int, @factorial int OUT
AS
DECLARE @previous_number int
IF (@base_number<2) SET @factorial=1 -- Factorial of 0 or 1=1
ELSE BEGIN
SET @previous_number=@base_number-1
EXEC dbo.sp_calcfactorial @previous_number, @factorial OUT -- Recursive call
IF (@factorial=-1) RETURN(-1) -- Got an error, return
SET @factorial=@factorial*@base_number
END
RETURN(0)
GO

calling proc.
DECLARE @factorial int
EXEC dbo.sp_calcfactorial 4, @factorial OUT
SELECT @factorial
26. Nested Triggers
Triggers are nested when a trigger performs an action that initiates another trigger, which can initiate another trigger, and so on. Triggers can be nested up to 32 levels, and you can control whether triggers can be nested through the nested triggers server configuration option.
27. What is an extended stored procedure? Can you instantiate a COM object by using T-SQL?
An extended stored procedure is a function within a DLL (written in a programming language like C, C++ using Open Data Services (ODS) API) that can be called from T-SQL, just the way we call normal stored procedures using the EXEC statement.
28. Difference between view and stored procedure?
A view can contain only a single SELECT statement (INSERT, UPDATE, DELETE and DDL statements are not allowed in the definition). A view definition also cannot contain SELECT INTO, COMPUTE/COMPUTE BY, or ORDER BY (unless TOP is used), and it cannot take parameters, whereas a stored procedure can contain many statements, take parameters and modify data.
29. What is a Function & what are the different user defined functions?
Function is a saved Transact-SQL routine that returns a value. User-defined functions cannot be used to perform a set of actions that modify the global database state. User-defined functions, like system functions, can be invoked from a query. They also can be executed through an EXECUTE statement like stored procedures.

0. Scalar Functions
Functions are scalar-valued if the RETURNS clause specified one of the scalar data types
1. Inline Table-valued Functions
If the RETURNS clause specifies TABLE with no accompanying column list, the function is an inline function.
2. Multi-statement Table-valued Functions
If the RETURNS clause specifies a TABLE type with columns and their data types, the function is a multi-statement table-valued function.
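Minimal sketches of a scalar and an inline table-valued function (the names and logic are illustrative; marks is the table used elsewhere in this document):
-- Scalar function
CREATE FUNCTION dbo.AddTen (@n int)
RETURNS int
AS
BEGIN
RETURN (@n + 10)
END
GO
-- Inline table-valued function
CREATE FUNCTION dbo.MarksForStudent (@sid int)
RETURNS TABLE
AS
RETURN (SELECT sid, cid, mark FROM marks WHERE sid = @sid)
GO
-- Usage
SELECT dbo.AddTen(5)
SELECT * FROM dbo.MarksForStudent(1)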
30. What are the difference between a function and a stored procedure?
0. Functions can be used in a SELECT statement, whereas procedures cannot.
1. Procedures take both input and output parameters, but functions take only input parameters.
2. Functions cannot return values of type text, ntext, image and timestamp, whereas procedures can.
3. Functions can be used in computed-column definitions in CREATE TABLE, but procedures cannot.
Eg: CREATE TABLE emp (name varchar(10), salary AS dbo.getsal(name))
Here getsal is a user-defined function that returns the salary. When the table is created no storage is allotted for the computed salary column and getsal is not executed at that time; but when we fetch rows from this table, getsal is executed and its return value appears in the result set.
31. How to debug a stored procedure?

1. TRIGGER
2. What is Trigger? What is its use? What are the types of Triggers? What are the new kinds of triggers in sql 2000?
Triggers are a special class of stored procedure defined to execute automatically when an UPDATE, INSERT, or DELETE statement is issued against a table or view. Triggers are powerful tools that sites can use to enforce their business rules automatically when data is modified.
The CREATE TRIGGER statement can be defined with the FOR UPDATE, FOR INSERT, or FOR DELETE clauses to target a trigger to a specific class of data modification actions. When FOR UPDATE is specified, the IF UPDATE (column_name) clause can be used to target a trigger to updates affecting a particular column.
You can use the FOR clause to specify when a trigger is executed:
* AFTER (default) - The trigger executes after the statement that triggered it completes. If the statement fails with an error, such as a constraint violation or syntax error, the trigger is not executed. AFTER triggers cannot be specified for views.
* INSTEAD OF - The trigger executes in place of the triggering action. INSTEAD OF triggers can be specified on both tables and views. You can define only one INSTEAD OF trigger for each triggering action (INSERT, UPDATE, and DELETE). INSTEAD OF triggers can be used to perform enhanced integrity checks on the data values supplied in INSERT and UPDATE statements. INSTEAD OF triggers also let you specify actions that allow views, which would normally not support updates, to be updatable.
An INSTEAD OF trigger can take actions such as:
o Ignoring parts of a batch.
o Not processing a part of a batch and logging the problem rows.
o Taking an alternative action if an error condition is encountered.

In SQL Server 6.5 you could define only 3 triggers per table, one for INSERT, one for UPDATE and one for DELETE. From SQL Server 7.0 onwards, this restriction is gone, and you could create multiple triggers per each action. But in 7.0 there's no way to control the order in which the triggers fire. In SQL Server 2000 you could specify which trigger fires first or fires last using sp_settriggerorder.
Till SQL Server 7.0, triggers fire only after the data modification operation happens. So in a way, they are called post triggers. But in SQL Server 2000 you could create pre triggers also.

3. When should one use "instead of Trigger"? Example
CREATE TABLE BaseTable
(
PrimaryKey int IDENTITY(1,1),
Color nvarchar(10) NOT NULL,
Material nvarchar(10) NOT NULL,
ComputedCol AS (Color + Material)
)
GO

--Create a view that contains all columns from the base table.
CREATE VIEW InsteadView
AS SELECT PrimaryKey, Color, Material, ComputedCol
FROM BaseTable
GO

--Create an INSTEAD OF INSERT trigger on the view.
CREATE TRIGGER InsteadTrigger on InsteadView
INSTEAD OF INSERT
AS
BEGIN
--Build an INSERT statement ignoring inserted.PrimaryKey and
--inserted.ComputedCol.
INSERT INTO BaseTable
SELECT Color, Material
FROM inserted
END
GO

-- Values can be inserted into the base table directly:
insert into basetable(color,material) values ('red','abc')

-- insert into InsteadView(color,material) values ('red','abc') can't be done;
-- it gives the error "'PrimaryKey' in table 'InsteadView' cannot be null."

-- Values can be inserted through the view like this
-- (the trigger ignores PrimaryKey and ComputedCol, so placeholders are supplied):
insert into InsteadView values (1,'red','abc',1)
4. Difference between trigger and stored procedure?
A trigger gets executed automatically when an UPDATE, INSERT, or DELETE statement is issued against a table or view.
A stored procedure has to be called explicitly, or it can be marked for automatic execution when SQL Server starts (using the sp_procoption system stored procedure).
5. The following trigger generates an e-mail whenever a new title is added.
CREATE TRIGGER reminder
ON titles
FOR INSERT
AS
EXEC master..xp_sendmail 'MaryM', 'New title, mention in the next report to distributors.'
6. Drawback of trigger? Its alternative solution?
Triggers are generally used to implement business rules, auditing. Triggers can also be used to extend the referential integrity checks, but wherever possible, use constraints for this purpose, instead of triggers, as constraints are much faster.

LOCK
7. What are locks?
Microsoft® SQL Server™ 2000 uses locking to ensure transactional integrity and database consistency. Locking prevents users from reading data being changed by other users, and prevents multiple users from changing the same data at the same time. If locking is not used, data within the database may become logically incorrect, and queries executed against that data may produce unexpected results.
8. What are the different types of locks?
SQL Server uses these resource lock modes.

Shared (S) - Used for operations that do not change or update data (read-only operations), such as a SELECT statement.
Update (U) - Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later.
Exclusive (X) - Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.
Intent - Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).
Schema - Used when an operation dependent on the schema of a table is executing. The types of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).
Bulk Update (BU) - Used when bulk-copying data into a table and the TABLOCK hint is specified.

9. What is a dead lock? Give a practical sample? How you can minimize the deadlock situation? What is a deadlock and what is a live lock? How will you go about resolving deadlocks?
Deadlock is a situation when two processes, each having a lock on one piece of data, attempt to acquire a lock on the other's piece. Each process would wait indefinitely for the other to release the lock, unless one of the user processes is terminated. SQL Server detects deadlocks and terminates one user's process.
A livelock is one, where a request for an exclusive lock is repeatedly denied because a series of overlapping shared locks keeps interfering. SQL Server detects the situation after four denials and refuses further shared locks. (A livelock also occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely.)
10. What is isolation level?
An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. A lower isolation level increases concurrency, but at the expense of data correctness. Conversely, a higher isolation level ensures that data is correct, but can affect concurrency negatively. The isolation level required by an application determines the locking behavior SQL Server uses.
SQL-92 defines the following isolation levels, all of which are supported by SQL Server:
* Read uncommitted (the lowest level where transactions are isolated only enough to ensure that physically corrupt data is not read).
* Read committed (SQL Server default level).
* Repeatable read.
* Serializable (the highest level, where transactions are completely isolated from one another).

Isolation level - Dirty read / Nonrepeatable read / Phantom
Read uncommitted - Yes / Yes / Yes
Read committed - No / Yes / Yes
Repeatable read - No / No / Yes
Serializable - No / No / No

11. Uncommitted Dependency (Dirty Read) - Uncommitted dependency occurs when a second transaction selects a row that is being updated by another transaction. The second transaction is reading data that has not been committed yet and may be changed by the transaction updating the row. For example, an editor is making changes to an electronic document. During the changes, a second editor takes a copy of the document that includes all the changes made so far, and distributes the document to the intended audience.
Inconsistent Analysis (Nonrepeatable Read) Inconsistent analysis occurs when a second transaction accesses the same row several times and reads different data each time. Inconsistent analysis is similar to uncommitted dependency in that another transaction is changing the data that a second transaction is reading. However, in inconsistent analysis, the data read by the second transaction was committed by the transaction that made the change. Also, inconsistent analysis involves multiple reads (two or more) of the same row and each time the information is changed by another transaction; thus, the term nonrepeatable read. For example, an editor reads the same document twice, but between each reading, the writer rewrites the document. When the editor reads the document for the second time, it has changed.
Phantom Reads Phantom reads occur when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction. The transaction's first read of the range of rows shows a row that no longer exists in the second or succeeding read, as a result of a deletion by a different transaction. Similarly, as the result of an insert by a different transaction, the transaction's second or succeeding read shows a row that did not exist in the original read. For example, an editor makes changes to a document submitted by a writer, but when the changes are incorporated into the master copy of the document by the production department, they find that new unedited material has been added to the document by the author. This problem could be avoided if no one could add new material to the document until the editor and production department finish working with the original document.
12. NOLOCK? What is the difference between the REPEATABLE READ and SERIALIZABLE isolation levels?
Locking Hints - A range of table-level locking hints can be specified using the SELECT, INSERT, UPDATE, and DELETE statements to direct Microsoft® SQL Server 2000 to the type of locks to be used. Table-level locking hints can be used when a finer control of the types of locks acquired on an object is required. These locking hints override the current transaction isolation level for the session. As for REPEATABLE READ versus SERIALIZABLE: SERIALIZABLE additionally takes key-range locks, so no new (phantom) rows can be inserted into the range being read, whereas REPEATABLE READ prevents nonrepeatable reads but not phantoms.

HOLDLOCK - Hold a shared lock until completion of the transaction instead of releasing the lock as soon as the required table, row, or data page is no longer required. HOLDLOCK is equivalent to SERIALIZABLE.
NOLOCK - Do not issue shared locks and do not honor exclusive locks. When this option is in effect, it is possible to read an uncommitted transaction or a set of pages that are rolled back in the middle of a read. Dirty reads are possible. Only applies to the SELECT statement.
PAGLOCK - Use page locks where a single table lock would usually be taken.
READCOMMITTED - Perform a scan with the same locking semantics as a transaction running at the READ COMMITTED isolation level. By default, SQL Server 2000 operates at this isolation level.
READPAST - Skip locked rows. This option causes a transaction to skip rows locked by other transactions that would ordinarily appear in the result set, rather than block the transaction waiting for the other transactions to release their locks on these rows. The READPAST lock hint applies only to transactions operating at READ COMMITTED isolation and will read only past row-level locks. Applies only to the SELECT statement.
READUNCOMMITTED - Equivalent to NOLOCK.
REPEATABLEREAD - Perform a scan with the same locking semantics as a transaction running at the REPEATABLE READ isolation level.
ROWLOCK - Use row-level locks instead of the coarser-grained page- and table-level locks.
SERIALIZABLE - Perform a scan with the same locking semantics as a transaction running at the SERIALIZABLE isolation level. Equivalent to HOLDLOCK.
TABLOCK - Use a table lock instead of the finer-grained row- or page-level locks. SQL Server holds this lock until the end of the statement. However, if you also specify HOLDLOCK, the lock is held until the end of the transaction.
TABLOCKX - Use an exclusive lock on a table. This lock prevents others from reading or updating the table and is held until the end of the statement or transaction.
UPDLOCK - Use update locks instead of shared locks while reading a table, and hold locks until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.
XLOCK - Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement. This lock can be specified with either PAGLOCK or TABLOCK, in which case the exclusive lock applies to the appropriate level of granularity.

13. For example, if the transaction isolation level is set to SERIALIZABLE, and the table-level locking hint NOLOCK is used with the SELECT statement, key-range locks typically used to maintain serializable transactions are not taken.
USE pubs
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
SELECT au_lname FROM authors WITH (NOLOCK)
GO
14. What is escalation of locks?
Lock escalation is the process of converting many low-level locks (like row locks and page locks) into higher-level locks (like table locks). Every lock is a memory structure, so too many locks would mean more memory being occupied by locks. To prevent this from happening, SQL Server escalates the many fine-grain locks to fewer coarse-grain locks. The lock escalation threshold was definable in SQL Server 6.5, but from SQL Server 7.0 onwards it is dynamically managed by SQL Server.

1. VIEW
2. What is View? Use? Syntax of View?
A view is a virtual table made up of data from base tables and other views, but not stored separately.
* Views simplify users' perception of the database (they can be used to present only the necessary information while hiding details in the underlying relations)
* Views improve data security by preventing undesired accesses
* Views facilitate the provision of additional data independence
3. Does the View occupy memory space?
No
4. Can u drop a table if it has a view?
Views or tables participating in a view created with the SCHEMABINDING clause cannot be dropped. If the view is not created using SCHEMABINDING, then we can drop the table.
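For example, a schema-bound view (the view name is illustrative; note that two-part object names are required and SELECT * is not allowed in the definition):
CREATE VIEW dbo.AuthorNames
WITH SCHEMABINDING
AS
SELECT au_id, au_lname, au_fname
FROM dbo.authors
GO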
5. Why doesn't SQL Server permit an ORDER BY clause in the definition of a view?
SQL Server excludes an ORDER BY clause from a view to comply with the ANSI SQL-92 standard. Because analyzing the rationale for this standard requires a discussion of the underlying structure of the structured query language (SQL) and the mathematics upon which it is based, we can't fully explain the restriction here. However, if you need to be able to specify an ORDER BY clause in a view, consider using the following workaround:
USE pubs
GO
CREATE VIEW AuthorsByName
AS
SELECT TOP 100 PERCENT *
FROM authors
ORDER BY au_lname, au_fname
GO
The TOP construct, which Microsoft introduced in SQL Server 7.0, is most useful when you combine it with the ORDER BY clause. The only time that SQL Server supports an ORDER BY clause in a view is when it is used in conjunction with the TOP keyword. (Note that the TOP keyword is a SQL Server extension to the ANSI SQL-92 standard.)

TRANSACTION
6. What is Transaction?
A transaction is a sequence of operations performed as a single logical unit of work. A logical unit of work must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction:
* Atomicity - A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.
* Consistency - When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
* Isolation - Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.
* Durability - After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
7. After a BEGIN TRANSACTION there is a TRUNCATE statement and then a ROLLBACK. Will it be rolled back? Since the truncate statement does not log individual row deletions, how does it roll back?
It will roll back. TRUNCATE TABLE is a logged operation: it logs the page deallocations rather than the individual row deletions, and those deallocations can be rolled back.
8. Given a SQL like
Begin Tran
Select @@Rowcount
Begin Tran
Select @@Rowcount
Begin Tran
Select @@Rowcount
Commit Tran
Select @@Rowcount
RollBack
Select @@Rowcount
RollBack
Select @@Rowcount
What is the value of @@Rowcount at each stmt levels?
Ans : 0 – zero.
@@ROWCOUNT - Returns the number of rows affected by the last statement. Statements such as BEGIN TRANSACTION, COMMIT and ROLLBACK reset @@ROWCOUNT to 0, which is why each SELECT above returns 0.
@@TRANCOUNT - Returns the number of active transactions for the current connection.
Each BEGIN TRAN increments @@TRANCOUNT by one, each COMMIT TRAN decrements it by one, and a single ROLLBACK resets it to 0.

OTHER
9. What are the constraints on a table?
Constraints define rules regarding the values allowed in columns and are the standard mechanism for enforcing integrity. SQL Server 2000 supports five classes of constraints:
NOT NULL
CHECK
UNIQUE
PRIMARY KEY
FOREIGN KEY
10. There are 50 columns in a table. Write a query to get first 25 columns
Ans: You need to mention each of the 25 column names explicitly in the SELECT list.
11. How to list all the tables in a particular database?
USE pubs
GO
sp_help
12. What are cursors? Explain different types of cursors. What are the disadvantages of cursors? How can you avoid cursors?
Cursors allow row-by-row processing of the result sets.
Types of cursors: Static, Dynamic, Forward-only, Keyset-driven.
Disadvantages of cursors: Each time you fetch a row from the cursor, it results in a network roundtrip. Cursors are also costly because they require more resources and temporary storage (results in more IO operations). Further, there are restrictions on the SELECT statements that can be used with some types of cursors.
How to avoid cursor:

1. Most of the times, set based operations can be used instead of cursors. Here is an example: If you have to give a flat hike to your employees using the following criteria:
Salary between 30000 and 40000 -- 5000 hike
Salary between 40000 and 55000 -- 7000 hike
Salary between 55000 and 65000 -- 9000 hike
In this situation many developers tend to use a cursor, determine each employee's salary and update his salary according to the above formula. But the same can be achieved by multiple update statements or can be combined in a single UPDATE statement as shown below:
UPDATE tbl_emp SET salary =
CASE WHEN salary BETWEEN 30000 AND 40000 THEN salary + 5000
WHEN salary BETWEEN 40000 AND 55000 THEN salary + 7000
WHEN salary BETWEEN 55000 AND 65000 THEN salary + 9000
END

2. You need to call a stored procedure when a column in a particular row meets certain condition. You don't have to use cursors for this. This can be achieved using WHILE loop, as long as there is a unique key to identify each row. For examples of using WHILE loop for row by row processing, check out the 'My code library' section of my site or search for WHILE.

13. What is Dynamic Cursor? Suppose, I have a dynamic cursor attached to table in a database. I have another means by which I will modify the table. What do you think will the values in the cursor be?
Dynamic cursors reflect all changes made to the rows in their result set when scrolling through the cursor. The data values, order, and membership of the rows in the result set can change on each fetch. All UPDATE, INSERT, and DELETE statements made by all users are visible through the cursor. Updates are visible immediately if they are made through the cursor using either an API function such as SQLSetPos or the Transact-SQL WHERE CURRENT OF clause. Updates made outside the cursor are not visible until they are committed, unless the cursor transaction isolation level is set to read uncommitted.
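A minimal sketch of declaring a dynamic cursor over the marks table used in the earlier examples:
DECLARE mark_cursor CURSOR DYNAMIC
FOR SELECT sid, mark FROM marks
OPEN mark_cursor
FETCH NEXT FROM mark_cursor
-- changes made to marks by other connections become visible on subsequent fetches
CLOSE mark_cursor
DEALLOCATE mark_cursor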
14. What is DATEPART?
Returns an integer representing the specified datepart of the specified date.
15. Difference between Delete and Truncate?
TRUNCATE TABLE is functionally identical to DELETE statement with no WHERE clause: both remove all rows in the table.
(1) But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE. The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.
(2) Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger.
(3) The counter used by an identity for new rows is reset to the seed for the column. If you want to retain the identity counter, use DELETE instead.
Of course, TRUNCATE TABLE can be rolled back when it is executed inside a transaction.
16. Given a scenario where two operations, a DELETE statement and a TRUNCATE statement, were issued and the DELETE succeeded but the TRUNCATE failed - can you judge why? (For example, TRUNCATE TABLE cannot be used on a table referenced by a FOREIGN KEY constraint, and it requires higher permissions than DELETE.)
**
17. What are global variables? Tell me some of them?
Transact-SQL global variables are a form of function and are now referred to as functions.
Some examples: @@ERROR, @@IDENTITY, @@ROWCOUNT, @@TRANCOUNT, @@VERSION, @@SERVERNAME, @@SPID.
18. What is DDL?
Data definition language (DDL) statements are SQL statements that support the definition or declaration of database objects (for example, CREATE TABLE, DROP TABLE, and ALTER TABLE).
You can use the ADO Command object to issue DDL statements. To differentiate DDL statements from a table or stored procedure name, set the CommandType property of the Command object to adCmdText. Because executing DDL queries with this method does not generate any recordsets, there is no need for a Recordset object.
19. What is DML?
Data Manipulation Language (DML), which is used to select, insert, update, and delete data in the objects defined using DDL
20. What are keys in RDBMS? What is a primary key/ foreign key?
There are two kinds of keys.
A primary key is a set of columns from a table that are guaranteed to have unique values for each row of that table.
Foreign keys are attributes of one table that have matching values in a primary key in another table, allowing for relationships between tables.
21. What is the difference between Primary Key and Unique Key?
Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default primary key creates a clustered index on the column, whereas unique creates a nonclustered index by default. Another major difference is that primary key doesn't allow NULLs, but unique key allows one NULL only.
22. Define candidate key, alternate key, composite key?
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys.
A key formed by combining at least two or more columns is called composite key.
23. What is the Referential Integrity?
Referential integrity refers to the consistency that must be maintained between primary and foreign keys, i.e. every foreign key value must have a corresponding primary key value.
24. What are defaults? Is there a column to which a default can't be bound?
A default is a value that will be used by a column, if no value is supplied to that column while inserting data. IDENTITY columns and timestamp columns can't have defaults bound to them.
25. What is Query optimization? How is tuning a performance of query done?
26. What is the use of trace utility?
**
27. What is the use of shell commands? xp_cmdshell
Executes a given command string as an operating-system command shell and returns any output as rows of text. By default only members of the sysadmin role can execute xp_cmdshell, but the permission can be granted to nonadministrative users.
28. What is use of shrink database?
Microsoft® SQL Server 2000 allows each file within a database to be shrunk to remove unused pages. Both data and transaction log files can be shrunk.
29. If the performance of the query suddenly decreased where you will check?
30. What is a pass-through query?
Microsoft® SQL Server 2000 sends pass-through queries as un-interpreted query strings to an OLE DB data source. The query must be in a syntax the OLE DB data source will accept. A Transact-SQL statement uses the results from a pass-through query as though it is a regular table reference.
This example uses a pass-through query to retrieve a result set from a Microsoft Access version of the Northwind sample database.
SELECT *
FROM OpenRowset('Microsoft.Jet.OLEDB.4.0',
'c:\northwind.mdb';'admin'; '',
'SELECT CustomerID, CompanyName
FROM Customers
WHERE Region = ''WA'' ')
31. How do you differentiate Local and Global Temporary table?
You can create local and global temporary tables. Local temporary tables are visible only in the current session; global temporary tables are visible to all sessions. Prefix local temporary table names with single number sign (#table_name), and prefix global temporary table names with a double number sign (##table_name). SQL statements reference the temporary table using the value specified for table_name in the CREATE TABLE statement:
CREATE TABLE #MyTempTable (cola INT PRIMARY KEY)
INSERT INTO #MyTempTable VALUES (1)
32. How the Exists keyword works in SQL Server?
USE pubs
SELECT au_lname, au_fname
FROM authors
WHERE exists
(SELECT *
FROM publishers
WHERE authors.city = publishers.city)
When a subquery is introduced with the keyword EXISTS, it functions as an existence test. The WHERE clause of the outer query tests for the existence of rows returned by the subquery. The subquery does not actually produce any data; it returns a value of TRUE or FALSE.
33. ANY?
USE pubs
SELECT au_lname, au_fname
FROM authors
WHERE city = ANY
(SELECT city
FROM publishers)
34. How to select the date part only?
SELECT CONVERT(char(10),GetDate(),101)
--to select time part only
SELECT right(GetDate(),7)
35. How can I send a message to user from the SQL Server?
You can use the xp_cmdshell extended stored procedure to run net send command. This is the example to send the 'Hello' message to JOHN:
EXEC master..xp_cmdshell "net send JOHN 'Hello'"
To get net send message on the Windows 9x machines, you should run the WinPopup utility. You can place WinPopup in the Startup group under Program Files.
36. What is normalization? Explain different levels of normalization? Explain Third normalization form with an example?
The process of refining tables, keys, columns, and relationships to create an efficient database is called normalization. This eliminates unnecessary duplication and provides a rapid search path to all necessary information.
Some of the benefits of normalization are:

* Data integrity (because there is no redundant, neglected data)
* Optimized queries (because normalized tables produce rapid, efficient joins)
* Faster index creation and sorting (because the tables have fewer columns)
* Faster UPDATE performance (because there are fewer indexes per table)
* Improved concurrency resolution (because table locks will affect less data)
* Eliminate redundancy

There are a few rules for database normalization. Each rule is called a "normal form." If the first rule is observed, the database is said to be in "first normal form." If the first three rules are observed, the database is considered to be in "third normal form." Although other levels of normalization are possible, third normal form is considered the highest level necessary for most applications.

6. First Normal Form (1NF)
* Eliminate repeating groups in individual tables
* Create a separate table for each set of related data.
* Identify each set of related data with a primary key.

Do not use multiple fields in a single table to store similar data.
Example:

Manager | Subordinate1 | Subordinate2 | Subordinate3 | Subordinate4
Bob | Jim | Mary | Beth |
Mary | Mike | Jason | Carol | Mark
Jim | Alan | | |

Eliminate duplicative columns from the same table. Clearly, the Subordinate1-Subordinate4 columns are duplicative. What happens when we need to add or remove a subordinate?



Manager | Subordinates
Bob | Jim, Mary, Beth
Mary | Mike, Jason, Carol, Mark
Jim | Alan

This solution is closer, but it also falls short of the mark. The subordinates column is still duplicative and non-atomic. What happens when we need to add or remove a subordinate? We need to read and write the entire contents of the table. That’s not a big deal in this situation, but what if one manager had one hundred employees? Also, it complicates the process of selecting data from the database in future queries.
Solution:



Manager | Subordinate
Bob | Jim
Bob | Mary
Bob | Beth
Mary | Mike
Mary | Jason
Mary | Carol
Mary | Mark
Jim | Alan

7. Second Normal Form (2NF)
* Create separate tables for sets of values that apply to multiple records.
* Relate these tables with a foreign key.

Records should not depend on anything other than a table's primary key (a compound key, if necessary).
For example, consider a customer's address in an accounting system. The address is needed by the Customers table, but also by the Orders, Shipping, Invoices, Accounts Receivable, and Collections tables. Instead of storing the customer's address as a separate entry in each of these tables, store it in one place, either in the Customers table or in a separate Addresses table.

8. Third Normal Form (3NF)
* Eliminate fields that do not depend on the key.

Values in a record that are not part of that record's key do not belong in the table. In general, any time the contents of a group of fields may apply to more than a single record in the table, consider placing those fields in a separate table.
For example, in an Employee Recruitment table, a candidate's university name and address may be included. But you need a complete list of universities for group mailings. If university information is stored in the Candidates table, there is no way to list universities with no current candidates. Create a separate Universities table and link it to the Candidates table with a university code key.
Another example:

MemberId | Name | Company | CompanyLoc
1 | John Smith | ABC | Alabama
2 | Dave Jones | MCI | Florida

The Member table satisfies first normal form - it contains no repeating groups. It satisfies second normal form - since it doesn't have a multivalued key. But the key is MemberID, and the company name and location describe only a company, not a member. To achieve third normal form, they must be moved into a separate table. Since they describe a company, CompanyCode becomes the key of the new "Company" table.

The motivation for this is the same as for second normal form: we want to avoid update and delete anomalies. For example, suppose no members from IBM were currently stored in the database. With the previous design, there would be no record of IBM's existence, even though 20 past members were from IBM!
Member Table

MemberId | Name | CID
1 | John Smith | 1
2 | Dave Jones | 2

Company Table

CId | Name | Location
1 | ABC | Alabama
2 | MCI | Florida

9. Boyce-Codd Normal Form (BCNF)
A relation is in Boyce-Codd normal form if and only if every determinant is a candidate key. It is a stricter version of 3NF and was, in fact, intended to replace it. [A determinant is any attribute on which some other attribute is (fully) functionally dependent.]
10. 4th Normal Form (4NF)
A table is in 4NF if it is in BCNF and has no multi-valued dependencies. This applies primarily to key-only associative tables that appear as a single ternary relationship but have incorrectly merged two distinct, independent relationships.
E.g., this could be any two M:M relationships hanging off a single entity. For instance, a member could know many software tools, and a software tool may be used by many members. Also, a member could have recommended many books, and a book could be recommended by many members.

Software --- Member --- Book
(a single associative structure incorrectly merging the two independent M:M relationships)

11. The correct solution, to put the model into fourth normal form, is to resolve each M:M relationship independently if the relationships are indeed independent, as shown below.

Software --- MemberSoftware --- Member --- MemberBook --- Book
(two separate junction tables, one for each independent M:M relationship)
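
A minimal T-SQL sketch of the two junction tables (all names and types are illustrative; the Member, Software, and Book tables with the listed keys are assumed to exist):

-- Resolves the Member-to-Software M:M relationship
CREATE TABLE MemberSoftware
(
    MemberId   int NOT NULL REFERENCES Member(MemberId),
    SoftwareId int NOT NULL REFERENCES Software(SoftwareId),
    CONSTRAINT PK_MemberSoftware PRIMARY KEY (MemberId, SoftwareId)
)

-- Resolves the Member-to-Book M:M relationship
CREATE TABLE MemberBook
(
    MemberId int NOT NULL REFERENCES Member(MemberId),
    BookId   int NOT NULL REFERENCES Book(BookId),
    CONSTRAINT PK_MemberBook PRIMARY KEY (MemberId, BookId)
)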

12. 5th Normal Form (5NF)(PJNF)
A table is in 5NF, also called "Projection-Join Normal Form", if it is in 4NF and if every join dependency in the table is a consequence of the candidate keys of the table.
13. Domain/key normal form (DKNF). A key uniquely identifies each row in a table. A domain is the set of permissible values for an attribute. By enforcing key and domain restrictions, the database is assured of being free from modification anomalies. DKNF is the normalization level that most designers aim to achieve.

**
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.

37. If a database is normalized to 3NF, what is the minimum number of tables it should contain? What is the minimum for 2NF and 1NF?
38. What is denormalization and when would you go for it?
As the name indicates, denormalization is the reverse process of normalization. It's the controlled introduction of redundancy in to the database design. It helps improve the query performance as the number of joins could be reduced.
39. How can I randomly sort query results?
To randomly order rows, or to return x number of randomly chosen rows, you might think of the RAND function inside the SELECT statement. But the RAND function is resolved only once for the entire query, so every row gets the same value. Instead, you can use an ORDER BY clause to sort the rows by the result of the NEWID function, as the following code shows:
SELECT *
FROM Northwind..Orders
ORDER BY NEWID()
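
To return a fixed number of randomly chosen rows rather than the whole table, the same trick can be combined with TOP (a small sketch using the table above):

-- Ten randomly chosen orders
SELECT TOP 10 *
FROM Northwind..Orders
ORDER BY NEWID()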
40. sp_who
Provides information about current Microsoft® SQL Server™ users and processes. The information returned can be filtered to return only those processes that are not idle.
41. Have you worked on dynamic SQL? How will you handle “ (double quotes) in dynamic SQL?
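One common approach (a sketch, not necessarily the expected answer): keep string literals in single quotes and double up any embedded single quotes, or, preferably, pass values as parameters with sp_executesql so no quote juggling is needed. The table and column below are from the Northwind sample database.

DECLARE @city varchar(30), @sql nvarchar(500)
SET @city = 'London'

-- Doubling single quotes when building the string by hand...
SET @sql = N'SELECT * FROM Northwind..Customers WHERE City = ''' + @city + N''''
EXEC (@sql)

-- ...or passing the value as a parameter (avoids quoting issues entirely)
EXEC sp_executesql
    N'SELECT * FROM Northwind..Customers WHERE City = @c',
    N'@c varchar(30)',
    @c = @city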
42. How to find dependents of a table?
Verify dependencies with sp_depends before dropping an object
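For example (the table name is illustrative):

EXEC sp_depends 'tbl_emp'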
43. What is the difference between a CONSTRAINT AND RULE?
Rules are a backward-compatibility feature that perform some of the same functions as CHECK constraints. CHECK constraints are the preferred, standard way to restrict the values in a column. CHECK constraints are also more concise than rules; there can only be one rule applied to a column, but multiple CHECK constraints can be applied. CHECK constraints are specified as part of the CREATE TABLE statement, while rules are created as separate objects and then bound to the column.
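A small illustration (the Product table is hypothetical):

-- CHECK constraint: specified as part of CREATE TABLE (preferred)
CREATE TABLE Product
(
    ProductID int NOT NULL PRIMARY KEY,
    Price     money NOT NULL CHECK (Price >= 0)
)
GO

-- Rule: created as a separate object and then bound to the column
CREATE RULE price_rule AS @value >= 0
GO
EXEC sp_bindrule 'price_rule', 'Product.Price'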
44. How to call a COM dll from SQL Server 2000?
sp_OACreate - Creates an instance of the OLE object on an instance of Microsoft® SQL Server
Syntax
sp_OACreate { progid | clsid }, objecttoken OUTPUT [ , context ]

context - Specifies the execution context in which the newly created OLE object runs. If specified, this value must be one of the following:
1 = In-process (.dll) OLE server only
4 = Local (.exe) OLE server only
5 = Both in-process and local OLE server allowed
Examples
A. Use Prog ID - This example creates a SQL-DMO SQLServer object by using its ProgID.

DECLARE @object int
DECLARE @hr int
DECLARE @src varchar(255), @desc varchar(255)

EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @object, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END

B. Use CLSID - This example creates a SQL-DMO SQLServer object by using its CLSID.

DECLARE @object int
DECLARE @hr int
DECLARE @src varchar(255), @desc varchar(255)

EXEC @hr = sp_OACreate '{00026BA1-0000-0000-C000-000000000046}', @object OUT
IF @hr <> 0
BEGIN
    EXEC sp_OAGetErrorInfo @object, @src OUT, @desc OUT
    SELECT hr=convert(varbinary(4),@hr), Source=@src, Description=@desc
    RETURN
END

45. Difference between sysusers and syslogins?
sysusers - Contains one row for each Microsoft® Windows user, Windows group, Microsoft SQL Server™ user, or SQL Server role in the database.
syslogins - Contains one row for each login account (server-wide; stored in the master database).
46. What is the maximum row size in SQL Server 2000?
8060 bytes (text, ntext, and image data are stored outside the row).
47. How will you find the structure of a table, all tables/views in one database, and all databases?
-- structure of a table
sp_help tbl_emp

-- list of all databases
sp_helpdb
OR
SELECT * FROM master.dbo.sysdatabases

-- details about database pubs: .mdf/.ldf file locations, size of database
sp_helpdb pubs

-- lists all tables in the current database
sp_tables
OR
SELECT * FROM information_schema.tables WHERE table_type = 'base table'
OR
SELECT * FROM sysobjects WHERE type = 'U' -- faster
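
For column-level detail, sp_columns or the INFORMATION_SCHEMA.COLUMNS view can also be used (the table name is illustrative):

EXEC sp_columns 'tbl_emp'
OR
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'tbl_emp'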
48. B-tree indexes or doubly-linked lists?
49. What is the system function to get the current user's user id?
USER_ID(). Also check out other system functions like USER_NAME(), SYSTEM_USER, SESSION_USER, CURRENT_USER, USER, SUSER_SID(), HOST_NAME().
50. What are the series of steps that happen on execution of a query in a Query Analyzer?
1) Syntax checking 2) Parsing 3) Execution plan
51. Which event (Check constraints, Foreign Key, Rule, trigger, Primary key check) will be performed last for integrity check?
Identity Insert Check
Nullability constraint
Data type check
Instead of trigger
Primary key
Check constraint
Foreign key
DML Execution (update statements)
After Trigger
**
52. How will you model a many-to-many relationship in SQL?
Create a third (junction) table whose two columns each have a one-to-many relationship back to the original tables; together, the two columns form the junction table's composite primary key, as sketched below.
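A hedged sketch (Student and Course are hypothetical tables assumed to already exist with the referenced primary keys):

-- Junction table resolving a many-to-many relationship
CREATE TABLE StudentCourse
(
    StudentID int NOT NULL REFERENCES Student(StudentID),
    CourseID  int NOT NULL REFERENCES Course(CourseID),
    CONSTRAINT PK_StudentCourse PRIMARY KEY (StudentID, CourseID)
)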
53. When a query is sent to the database and an index is not being used, what type of execution is taking place?
A table scan.
54. What do #, ##, @, and @@ mean?
# - local temporary objects (for example, #mytable)
## - global temporary objects (##mytable)
@ - user-defined variables
@@ - system variables/functions
55. What is the difference between a Local temporary table and a Global temporary table? How is each one denoted?
Local temporary table will be accessible only to the current user session; its name is preceded with a single hash (#mytable).
Global temporary table will be accessible to all users, and it will be dropped only after all sessions referencing it have ended; its name is preceded with a double hash (##mytable).
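A quick illustration (the column definition is arbitrary):

-- Local temporary table: visible only to the session that creates it
CREATE TABLE #mytable (id int)

-- Global temporary table: visible to all sessions until every session
-- referencing it has finished
CREATE TABLE ##mytable (id int)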
56. What are covered queries in SQL Server?
57. What is HASH JOIN, MERGE JOIN?

TOOLS

58. Have you ever used DBCC command? Give an example for it.
The Transact-SQL programming language provides DBCC statements that act as Database Console Commands for Microsoft® SQL Server 2000. These statements check the physical and logical consistency of a database. Many DBCC statements can fix detected problems. Database Console Command statements are grouped into these categories.

Statement category       | Performs
Maintenance statements   | Maintenance tasks on a database, index, or filegroup.
Miscellaneous statements | Miscellaneous tasks such as enabling row-level locking or removing a dynamic-link library (DLL) from memory.
Status statements        | Status checks.
Validation statements    | Validation operations on a database, table, index, catalog, filegroup, system tables, or allocation of database pages.

DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKCATALOG, DBCC CHECKALLOC, DBCC SHOWCONTIG, DBCC SHRINKDATABASE, DBCC SHRINKFILE etc.
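
For example, against the pubs sample database:

-- Check the logical and physical integrity of all objects in the database
DBCC CHECKDB ('pubs')

-- Show fragmentation information for the data and indexes of one table
DBCC SHOWCONTIG (authors)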

59. How do you use DBCC statements to monitor various aspects of a SQL server installation?
**
60. What is the output of DBCC Showcontig statement?
Displays fragmentation information for the data and indexes of the specified table.
61. How do I reset the identity column?
You can use the DBCC CHECKIDENT statement, if you want to reset or reseed the identity column. For example, if you need to force the current identity value in the jobs table to a value of 100, you can use the following:
USE pubs
GO
DBCC CHECKIDENT (jobs, RESEED, 100)
GO
62. About SQL Command line executables

Utilities

bcp
console
isql
sqlagent
sqldiag
sqlmaint
sqlservr
vswitch

dtsrun
dtswiz
isqlw
itwiz
odbccmpt
osql
rebuildm
sqlftwiz

distrib
logread
replmerg
snapshot

scm

regxmlss

63. What is DTC?
The Microsoft Distributed Transaction Coordinator (MS DTC) is a transaction manager that allows client applications to include several different sources of data in one transaction. MS DTC coordinates committing the distributed transaction across all the servers enlisted in the transaction.
64. What is DTS? Any drawbacks in using DTS?
Microsoft® SQL Server™ 2000 Data Transformation Services (DTS) is a set of graphical tools and programmable objects that lets you extract, transform, and consolidate data from disparate sources into single or multiple destinations.
65. What is BCP?
The bcp utility copies data between an instance of Microsoft® SQL Server™ 2000 and a data file in a user-specified format.
C:\Documents and Settings\sthomas>bcp
usage: bcp {dbtable | query} {in | out | queryout | format} datafile
[-m maxerrors] [-f formatfile] [-e errfile]
[-F firstrow] [-L lastrow] [-b batchsize]
[-n native type] [-c character type] [-w wide character type]
[-N keep non-text native] [-V file format version] [-q quoted identifier]
[-C code page specifier] [-t field terminator] [-r row terminator]
[-i inputfile] [-o outfile] [-a packetsize]
[-S server name] [-U username] [-P password]
[-T trusted connection] [-v version] [-R regional enable]
[-k keep null values] [-E keep identity values]
[-h "load hints"]
66. How can I create a plain-text flat file from SQL Server as input to another application?
One of the purposes of Extensible Markup Language (XML) is to solve challenges like this, but until all applications become XML-enabled, consider using our faithful standby, the bulk copy program (bcp) utility. This utility can do more than just dump a table; bcp also can take its input from a view instead of from a table. After you specify a view as the input source, you can limit the output to a subset of columns or to a subset of rows by selecting appropriate filtering (WHERE and HAVING) clauses.
More important, by using a view, you can export data from multiple joined tables. The only thing you cannot do is specify the sequence in which the rows are written to the flat file, because a view does not let you include an ORDER BY clause in it unless you also use the TOP keyword.
If you want to generate the data in a particular sequence or if you cannot predict the content of the data you want to export, be aware that in addition to a view, bcp also supports using an actual query. The only "gotcha" about using a query instead of a table or view is that you must specify queryout in place of out in the bcp command line.
For example, you can use bcp to generate from the pubs database a list of authors who reside in California by writing the following code:
bcp "SELECT * FROM pubs..authors WHERE state = 'CA'" queryout c:\CAauthors.txt -c -T -S
67. What are the different ways of moving data/databases between servers and databases in SQL Server?
There are lots of options available, you have to choose your option depending upon your requirements. Some of the options you have are: BACKUP/RESTORE, detaching and attaching databases, replication, DTS, BCP, logshipping, INSERT...SELECT, SELECT...INTO, creating INSERT scripts to generate data.
68. How will I export database?
Through DTS - Import/Export wizard
Backup - through Complete/Differential/Transaction Log
69. How to export database at a particular time, every week?
Backup - Schedule
DTS - Schedule
Jobs - create a new job
70. How do you load large data to the SQL server database?
bcp
71. How do you transfer data from text file to database (other than DTS)?
bcp
72. What is OSQL and ISQL utility?
The osql utility allows you to enter Transact-SQL statements, system procedures, and script files. This utility uses ODBC to communicate with the server.
The isql utility allows you to enter Transact-SQL statements, system procedures, and script files; and uses DB-Library to communicate with Microsoft® SQL Server™ 2000.
All DB-Library applications, such as isql, work as SQL Server 6.5–level clients when connected to SQL Server 2000. They do not support some SQL Server 2000 features.
The osql utility is based on ODBC and does support all SQL Server 2000 features. Use osql to run scripts that isql cannot run.
73. What tools have you used for checking query optimization? What is the use of Profiler in SQL Server? What is the first thing you look at in SQL Profiler?
SQL Profiler is a graphical tool that allows system administrators to monitor events in an instance of Microsoft® SQL Server™. You can capture and save data about each event to a file or SQL Server table to analyze later. For example, you can monitor a production environment to see which stored procedures are hampering performance by executing too slowly.
Use SQL Profiler to:

* Monitor the performance of an instance of SQL Server.
* Debug Transact-SQL statements and stored procedures.
* Identify slow-executing queries.
* Test SQL statements and stored procedures in the development phase of a project by single-stepping through statements to confirm that the code works as expected.
* Troubleshoot problems in SQL Server by capturing events on a production system and replaying them on a test system. This is useful for testing or debugging purposes and allows users to continue using the production system without interference.

Audit and review activity that occurred on an instance of SQL Server. This allows a security administrator to review any of the auditing events, including the success and failure of a login attempt and the success and failure of permissions in accessing statements and objects.

Permissions

74. A user is a member of the Public role and the Sales role. The Public role has permission to select on all the tables, but the Sales role does not have select permission on some of the tables. Will that user be able to select from all tables?
**
75. If a user does not have permission on a table, but he has permission to a view created on it, will he be able to view the data in table?
Yes.
76. Describe Application Role and explain a scenario when you will use it?
**
77. After removing a table from database, what other related objects have to be dropped explicitly?
(Dependent views and stored procedures.)
78. You have a stored procedure named YourSP that contains a SELECT statement. You also have a user named YourUser. What permissions will you give him to access the SP?
**
79. What are the different authentication modes in SQL Server? If a user is logged in under Windows authentication mode, how do you find his user ID?
There are two authentication modes in SQL Server:

1. Windows Authentication Mode
2. Mixed Mode (Windows Authentication and SQL Server Authentication)

Use the SYSTEM_USER system function to fetch the logged-on user name.

80. Give the connection strings used from the front end for both login types (Windows and SQL Server).
These are specific to SQL Server, not to other RDBMSs:
Data Source=MySQLServer;Initial Catalog=NORTHWIND;Integrated Security=SSPI (Windows authentication)
Data Source=MySQLServer;Initial Catalog=NORTHWIND;Uid=<username>;Pwd=<password> (SQL Server authentication)
81. What are three SQL keywords used to change or set someone’s permissions?
Grant, Deny and Revoke
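
For example (the table is from the pubs sample database; the user name is illustrative):

-- Allow SELECT, explicitly forbid INSERT, and then remove the SELECT permission again
GRANT SELECT ON authors TO SomeUser
DENY INSERT ON authors TO SomeUser
REVOKE SELECT ON authors FROM SomeUser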

Administration
82. Explain the architecture of SQL Server?
**
83. Different types of Backups?

* A full database backup is a full copy of the database.
* A transaction log backup copies only the transaction log.
* A differential backup copies only the database pages modified after the last full database backup.
* A file or filegroup backup copies individual files or filegroups; restoring one allows the recovery of just the portion of a database that was on the failed disk. (Example commands follow below.)
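
For example (database name from the sample databases; file paths are illustrative, and the log backup assumes the database is not using the simple recovery model):

-- Full database backup
BACKUP DATABASE pubs TO DISK = 'C:\Backups\pubs_full.bak'

-- Differential backup: only pages changed since the last full backup
BACKUP DATABASE pubs TO DISK = 'C:\Backups\pubs_diff.bak' WITH DIFFERENTIAL

-- Transaction log backup
BACKUP LOG pubs TO DISK = 'C:\Backups\pubs_log.trn'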
83. What are ‘jobs’ in SQL Server? How do we create one? What are tasks?
Using SQL Server Agent jobs, you can automate administrative tasks and run them on a recurring basis.
**
84. What is database replication? What are the different types of replication you can set up in SQL Server? How are they used? What is snapshot replication how is it different from Transactional replication?
Replication is the process of copying/moving data between databases on the same or different servers. SQL Server supports the following types of replication scenarios:

1. Snapshot replication - It distributes data exactly as it appears at a specific moment in time and doesn’t monitor for updates. It can be used when data changes are infrequent. It is often used for browsing data such as price lists, online catalogs, or data for decision support, where the current data is not required and data is used as read only.
2. Transactional replication (with immediate updating subscribers, with queued updating subscribers) - With this, an initial snapshot of data is applied, and whenever data modifications are made at the publisher, the individual transactions are captured and propagated to the subscribers.
3. Merge replication - It is the process of distributing data between publisher and subscriber; it allows the publisher and subscriber to update the data while connected or disconnected, and then merges the updates between the sites when they are connected.
85. How can you see which processes are running on SQL Server? How can you kill a process in SQL Server?

* Expand a server group, and then expand a server.
* Expand Management, and then expand Current Activity.
* Click Process Info. The current server activity is displayed in the details pane.

In the details pane, right-click a Process ID, and then click Kill Process.
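
The same can be done from Transact-SQL with sp_who and the KILL command (the SPID below is an example value):

-- List current processes and their SPIDs
EXEC sp_who

-- Terminate the process with SPID 53
KILL 53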

87. What is RAID and what are different types of RAID configurations?
RAID stands for Redundant Array of Inexpensive Disks, used to provide fault tolerance to database servers. There are six RAID levels, 0 through 5, offering different levels of performance and fault tolerance.
88. What are some tools/ways that help you troubleshoot performance problems?

Some of the tools/ways that help you troubleshoot performance problems are: SET SHOWPLAN_ALL ON, SET SHOWPLAN_TEXT ON, SET STATISTICS IO ON, SQL Server Profiler, Windows NT/2000 Performance Monitor, and the graphical execution plan in Query Analyzer.
89. How to determine the service pack currently installed on SQL Server?
The global variable @@Version stores the build number of the sqlservr.exe, which is used to determine the service pack installed.
eg: Microsoft SQL Server 2000 - 8.00.760 (Intel X86) Dec 17 2002 14:22:05 Copyright (c) 1988-2003 Microsoft Corporation Enterprise Edition on Windows NT 5.0 (Build 2195: Service Pack 3)
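
For example (SERVERPROPERTY is also available in SQL Server 2000; output depends on the installation):

SELECT @@VERSION
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel   -- e.g. 8.00.760 and SP3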
90. What is the purpose of using COLLATE in a query?
The term collation refers to a set of rules that determine how data is sorted and compared. In Microsoft® SQL Server 2000, you are not required to separately specify the code page and sort order for character data and the collation for Unicode data. Instead, specify the collation name and sorting rules to use. Character data is sorted using rules that define the correct character sequence, with options for specifying case-sensitivity, accent marks, kana character types, and character width. Microsoft SQL Server 2000 collations include these groupings:

* Windows collations - Windows collations define rules for storing character data based on the rules defined for an associated Windows locale. The base Windows collation rules specify which alphabet or language is used when dictionary sorting is applied, as well as the code page used to store non-Unicode character data. For Windows collations, the nchar, nvarchar, and ntext data types have the same sorting behavior as char, varchar, and text data types
* SQL collations - SQL collations are provided for compatibility with sort orders in earlier versions of Microsoft SQL Server.

Sort Order
Binary is the fastest sorting order, and is case-sensitive. If Binary is selected, the Case-sensitive, Accent-sensitive, Kana-sensitive, and Width-sensitive options are not available.

* Binary - Sorts and compares data in Microsoft® SQL Server™ tables based on the bit patterns defined for each character. Binary sort order is case-sensitive (lowercase precedes uppercase) and accent-sensitive. This is the fastest sorting order. If this option is not selected, SQL Server follows sorting and comparison rules as defined in dictionaries for the associated language or alphabet.
* Case-sensitive - Specifies that SQL Server distinguish between uppercase and lowercase letters. If not selected, SQL Server considers the uppercase and lowercase versions of letters to be equal. SQL Server does not define whether lowercase letters sort lower or higher in relation to uppercase letters when Case-sensitive is not selected.
* Accent-sensitive - Specifies that SQL Server distinguish between accented and unaccented characters. For example, 'a' is not equal to 'á'. If not selected, SQL Server considers the accented and unaccented versions of letters to be equal.
* Kana-sensitive - Specifies that SQL Server distinguish between the two types of Japanese kana characters: Hiragana and Katakana. If not selected, SQL Server considers Hiragana and Katakana characters to be equal.
* Width-sensitive - Specifies that SQL Server distinguish between a single-byte character (half-width) and the same character when represented as a double-byte character (full-width). If not selected, SQL Server considers the single-byte and double-byte representations of the same character to be equal.

Windows collation options:

* Use Latin1_General for the U.S. English character set (code page 1252).
* Use Modern_Spanish for all variations of Spanish, which also use the same character set as U.S. English (code page 1252).
* Use Arabic for all variations of Arabic, which use the Arabic character set (code page 1256).
* Use Japanese_Unicode for the Unicode version of Japanese (code page 932), which has a different sort order from Japanese, but the same code page (932).
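
A small usage sketch of COLLATE in a query (table and data are from the pubs sample database; the collation names are standard SQL collations):

-- Case-insensitive comparison: matches the author 'Green'
SELECT au_lname FROM pubs..authors
WHERE au_lname = 'green' COLLATE SQL_Latin1_General_CP1_CI_AS

-- Case-sensitive comparison: returns no rows, because the case differs
SELECT au_lname FROM pubs..authors
WHERE au_lname = 'green' COLLATE SQL_Latin1_General_CP1_CS_AS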
90. What is the STUFF Function and how does it differ from the REPLACE function?
STUFF - Deletes a specified length of characters and inserts another set of characters at a specified starting point.
SELECT STUFF('abcdef', 2, 3, 'ijklmn')
GO
Here is the result set:
---------
aijklmnef

REPLACE - Replaces all occurrences of the second given string expression in the first string expression with a third expression.
SELECT REPLACE('abcdefghicde','cde','xxx')
GO
Here is the result set:
------------
abxxxfghixxx

92. What does it mean to have quoted_identifier on? What are the implications of having it off?
When SET QUOTED_IDENTIFIER is OFF (default), literal strings in expressions can be delimited by single or double quotation marks.
When SET QUOTED_IDENTIFIER is ON, all strings delimited by double quotation marks are interpreted as object identifiers. Therefore, quoted identifiers do not have to follow the Transact-SQL rules for identifiers.
SET QUOTED_IDENTIFIER must be ON when creating or manipulating indexes on computed columns or indexed views. If SET QUOTED_IDENTIFIER is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail.
The SQL Server ODBC driver and Microsoft OLE DB Provider for SQL Server automatically set QUOTED_IDENTIFIER to ON when connecting.
When a stored procedure is created, the SET QUOTED_IDENTIFIER and SET ANSI_NULLS settings are captured and used for subsequent invocations of that stored procedure. When executed inside a stored procedure, the setting of SET QUOTED_IDENTIFIER is not changed.
SET QUOTED_IDENTIFIER OFF
GO
-- Attempt to create a table with a reserved keyword as a name
-- should fail.
CREATE TABLE "select" ("identity" int IDENTITY, "order" int)
GO

SET QUOTED_IDENTIFIER ON
GO
-- Will succeed.
CREATE TABLE "select" ("identity" int IDENTITY, "order" int)
GO
93. What is the purpose of UPDATE STATISTICS?
Updates information about the distribution of key values for one or more statistics groups (collections) in the specified table or indexed view.
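For example, to refresh all statistics on a table in the pubs sample database:

UPDATE STATISTICS authors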
94. Fundamentals of Data warehousing & olap?
95. What do u mean by OLAP server? What is the difference between OLAP and OLTP?
96. What is a tuple?
A tuple is a single row of a relation (table) - an instance of data within a relational database.
97. Services and user Accounts maintenance
98. sp_configure commands?
Displays or changes global configuration settings for the current server.
99. What are the basic functions of the master, msdb, model, and tempdb databases?
Microsoft® SQL Server 2000 systems have four system databases:
* master - The master database records all of the system level information for a SQL Server system. It records all login accounts and all system configuration settings. master is the database that records the existence of all other databases, including the location of the database files.
* tempdb - tempdb holds all temporary tables and temporary stored procedures. It also fills any other temporary storage needs such as work tables generated by SQL Server. tempdb is re-created every time SQL Server is started so the system starts with a clean copy of the database.
By default, tempdb autogrows as needed while SQL Server is running. If the size defined for tempdb is small, part of your system processing load may be taken up with autogrowing tempdb to the size needed to support your workload each time you restart SQL Server. You can avoid this overhead by using ALTER DATABASE to increase the size of tempdb.
* model - The model database is used as the template for all databases created on a system. When a CREATE DATABASE statement is issued, the first part of the database is created by copying in the contents of the model database, then the remainder of the new database is filled with empty pages. Because tempdb is created every time SQL Server is started, the model database must always exist on a SQL Server system.
* msdb - The msdb database is used by SQL Server Agent for scheduling alerts and jobs, and recording operators.
100. What are sequence diagrams? What you will get out of this sequence diagrams?
Sequence diagrams document the interactions between classes to achieve a result, such as a use case. Because UML is designed for object-oriented programming, these communications between classes are known as messages. The sequence diagram lists objects horizontally, and time vertically, and models these messages over time.
101. What are the new features of SQL Server 2000 compared to SQL Server 7.0? What are the new data types in SQL Server 2000?
* XML Support - The relational database engine can return data as Extensible Markup Language (XML) documents. Additionally, XML can also be used to insert, update, and delete values in the database. (for xml raw - to retrieve output as xml type)
* User-Defined Functions - The programmability of Transact-SQL can be extended by creating your own Transact-SQL functions. A user-defined function can return either a scalar value or a table.
* Indexed Views - Indexed views can significantly improve the performance of an application where queries frequently perform certain joins or aggregations. An indexed view allows indexes to be created on views, where the result set of the view is stored and indexed in the database.
* New Data Types - SQL Server 2000 introduces three new data types. bigint is an 8-byte integer type. sql_variant is a type that allows the storage of data values of different data types. table is a type that allows applications to store results temporarily for later use. It is supported for variables, and as the return type for user-defined functions.
* INSTEAD OF and AFTER Triggers - INSTEAD OF triggers are executed instead of the triggering action (for example, INSERT, UPDATE, DELETE). They can also be defined on views, in which case they greatly extend the types of updates a view can support. AFTER triggers fire after the triggering action. SQL Server 2000 introduces the ability to specify which AFTER triggers fire first and last.
* Multiple Instances of SQL Server - SQL Server 2000 supports running multiple instances of the relational database engine on the same computer. Each computer can run one instance of the relational database engine from SQL Server version 6.5 or 7.0, along with one or more instances of the database engine from SQL Server 2000. Each instance has its own set of system and user databases.
* Index Enhancements - You can now create indexes on computed columns. You can specify whether indexes are built in ascending or descending order, and if the database engine should use parallel scanning and sorting during index creation.
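
As a small illustration of two of the features listed above (a scalar user-defined function and the FOR XML clause), using the pubs sample database; the function name is illustrative:

-- Scalar user-defined function (new in SQL Server 2000)
CREATE FUNCTION dbo.fn_FullName (@first varchar(20), @last varchar(40))
RETURNS varchar(61)
AS
BEGIN
    RETURN @first + ' ' + @last
END
GO

-- Use the function, and return the result set as XML
SELECT dbo.fn_FullName(au_fname, au_lname) AS FullName
FROM pubs..authors
FOR XML RAW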
102. How do we open SQL Server in single user mode?
We can accomplish this in any of the three ways given below :-

1. From the command prompt:
sqlservr -m
2. From startup options:
Go to SQL Server Properties by right-clicking on the server name in Enterprise Manager.
Under the 'General' tab, click on 'Startup Parameters'.
Enter a value of -m in the Parameter field.
3. From the registry:
Go to HKEY_LOCAL_MACHINE\Software\Microsoft\MSSQLServer\MSSQLServer\Parameters.
Add a new string value.
Specify the 'Name' as SQLArg(n) and the 'Data' as -m.
Where n is the argument number in the list of arguments.
102. Difference between clustering and NLB (Network Load Balancing)?
**
103. Explain Active/Active and Active/Passive cluster configurations?
**
104. What is Log Shipping?
In Microsoft® SQL Server™ 2000 Enterprise Edition, you can use log shipping to feed transaction logs from one database to another on a constant basis. Continually backing up the transaction logs from a source database and then copying and restoring the logs to a destination database keeps the destination database synchronized with the source database. This allows you to have a backup server and also provides a way to offload query processing from the main computer (the source server) to read-only destination servers.
105. What are the main steps you take care for enhancing SQL Server performance?
**
106. You have to check whether any users are connected to the SQL Server database; if any user is connected, you have to disconnect the user(s) and then run a process in a job. How do you do this in a job?
**
XML
107. How can I convert data in a Microsoft Access table into XML format?
The following applications can help you convert Access data into XML format: Access 2002, ADO 2.5, and SQLXML. Access 2002 (part of Microsoft Office XP) enables you to query or save a table in XML format. You might be able to automate this process. ADO 2.5 and later enables you to open the data into a recordset, then persist the recordset in XML format, as the following code shows:
rs.Save "c:\rs.xml", adPersistXML
You can use linked servers to add the Access database to your SQL Server 2000 database so you can run queries from within SQL Server to retrieve data. Then, through HTTP, you can use the SQLXML technology to extract the Access data in the XML format you want.

NEW
108. What is @@IDENTITY?
Ans: Returns the last-inserted identity value.
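A small sketch (the table is hypothetical); note that SQL Server 2000 also provides SCOPE_IDENTITY(), which ignores identity values generated by triggers:

-- Hypothetical table with an IDENTITY column
CREATE TABLE Orders_Demo (OrderID int IDENTITY(1,1), Item varchar(50))

INSERT INTO Orders_Demo (Item) VALUES ('Widget')

SELECT @@IDENTITY         -- last identity value generated on this connection
SELECT SCOPE_IDENTITY()   -- last identity value generated in the current scope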
109. If a job fails in SQL Server, how do you find out what went wrong?
110. Have you used Error handling in DTS?
