Microsoft SQL Server 2012 Management and Administration
A transaction that reads two blocks off of a RAID 10 set would be much faster than a single disk read, because the blocks are striped and can be read simultaneously. So, both writes and reads are typically faster than any other option presented here. In addition to speed of both read and write operations, RAID 10 sets are more fault tolerant than any other configuration. But RAID 10's most obvious and dramatic drawback is the cost of requiring at least four disks. The disks themselves are costly, and they also consume considerable space and energy.
If you can afford them, RAID 10 arrays are certainly the best option. But they constitute a much greater expense than DASD. Disks are typically configured and controlled using one of two popular interface standards. Servers using hard disks that adhere to these standards can be directly cabled to the disks, resulting in the acronym DASD, for Direct Attached Storage Device. Host bus adapters (HBAs) control the movement of data between the server motherboard and the hard disks.
HBAs typically include performance options, such as a built-in cache, which can improve performance by buffering writes to the hard disk and then sending them in a burst. HBAs with write-cache controllers are not always safe for use with databases.
If a write-cache ever sustains a power failure, any writes stored in the write-cache vanish and are lost. As a precaution, many HBAs include a built-in battery backup. But a battery backup may fail to recharge or may lose its ability to hold a charge, at which point the write-cache should be disabled. HBAs are also important because their firmware may need to be independently updated and maintained when faced with updates to Windows or other hardware components.
Remember to stay apprised of vendor recommendations and updates when using high-end HBAs. The Internet is full of information about digital storage. However, some of it is old, outdated, or simply not useful. Think of NAS servers as more of a file-and-print server technology, not suitable for SQL Server database files, transaction log files, or backup files. SAN is typically expensive and also has a degree of management overhead in that it should be set up, configured, and administered only by a dedicated IT professional.
But what a SAN loses in speed, it makes up for in flexibility, redundant components, and manageability. In fact, it is easy for SAN administrators to virtualize storage so that it can be quickly moved around on the fly. This is great for SAN administrators, but it can be bad for DBAs if the storage the application has been depending upon is shared with another application in the enterprise. SANs also typically include large caches, which can be quickly and easily configured and reconfigured by SAN administrators to better balance and tune the applications that use the SAN.
Because solid-state drives (SSDs) are entirely electronic in nature, they offer significant savings in power consumption, greater speed, and better resistance to damage from impact compared to hard disks. From a hardware perspective, a variety of different types of memory chips might be used within an SSD. Volatile memory loses data when it loses power, whereas nonvolatile memory does not lose data when there is no power.
SSDs, however, have a few special considerations. First, the memory blocks within an SSD can be erased and rewritten a limited number of times. Second, SSDs require a lot of free memory blocks to perform write operations. Finally, whereas SQL Server indexes residing on hard disks need frequent defragmentation, indexes residing on SSDs have no such requirement. Because all memory blocks on the SSD are only a few electrons away, all read access is pretty much the same speed whether the index pages are contiguous or not.
Each spindle can only do one activity at a time. But we very often ask spindles to do contradictory work in a database application, such as performing a long serial read at the same time other users are asking it to do a lot of small, randomized writes. Any time a spindle is asked to do contradictory work, it simply takes much longer to finish the requests. On the other hand, when we ask disks to perform complementary work and segregate the contrary work off to a separate set of disks, performance improves dramatically.
For example, a SQL Server database will always have at least two files: a data file and a transaction log file. It is not uncommon to see very busy production databases with a lot of files, each on a different disk array. On the hardware side of the equation, DBAs might reconfigure a specific drive (for instance, the F: drive), or they might increase the amount of read and write cache available on the hard disk controller(s) or SAN. The following section is topical in approach.
Rather than describe all the administrative functions and capabilities of a certain screen, such as the Database Settings page in the SSMS Object Explorer, this section provides a top-down view of the most important considerations when designing the storage for an instance of SQL Server and how to achieve maximum performance, scalability, and reliability. SQL Server storage is centered on databases, although a few settings are adjustable at the instance-level. So, great importance is placed on proper design and management of database files.
Prescriptive guidance also covers important ways to optimize the use of filegroups in SQL Server 2012. Whenever a database is created on an instance of SQL Server 2012, a minimum of two database files is required: a data file and a transaction log file. By default, SQL Server will create a single database file and transaction log file on the same default destination disk. Under this configuration, the data file is called the primary data file and has the .mdf file extension. The log file has a file extension of .ldf. Any data files added beyond the primary are called secondary files and typically use the .ndf file extension.
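As a minimal sketch of these defaults (the database name and file paths here are hypothetical), the following Transact-SQL creates a database with an explicitly placed primary data file and log file, and then adds a secondary data file:

CREATE DATABASE SampleDB
ON PRIMARY
    (NAME = N'SampleDB', FILENAME = N'E:\Data\SampleDB.mdf', SIZE = 10GB)
LOG ON
    (NAME = N'SampleDB_log', FILENAME = N'F:\Logs\SampleDB_log.ldf', SIZE = 2GB);

-- Add a secondary (.ndf) data file, ideally on a separate disk
ALTER DATABASE SampleDB
ADD FILE (NAME = N'SampleDB2', FILENAME = N'G:\Data\SampleDB2.ndf', SIZE = 10GB);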
When you have an instance of SQL Server that does not have a high performance requirement, a single disk probably provides adequate performance. The following sections address important prescriptive guidance concerning data files. First, design tips and recommendations are provided for where on disk to place database files, as well as the optimal number of database files to use for a particular production database. At this stage of the design process, imagine that you have a user database that has only one data file and one log file.
So, if we can place the user data files and log files onto separate disks, where is the best place to put them? Database files should reside only on RAID volumes to provide fault tolerance and availability while increasing performance. As mentioned earlier, SQL Server defaults to the creation of a single primary data file and a single primary log file when creating a new database.
The log file contains the information needed to make transactions and databases fully recoverable. Because the transaction log is written sequentially, adding additional files to a transaction log almost never improves performance. Conversely, data files contain the tables (along with the data they contain), indexes, views, constraints, stored procedures, and so on, and they can benefit from being spread across multiple files. The general rule for this technique is to create one data file for every two to four logical processors available on the server. If a server had two four-core CPUs, for a total of eight logical CPUs, an important user database might do well to have four data files.
The newer and faster the CPU, the higher the ratio to use. A brand-new server with two four-core CPUs might do best with just two data files. Also note that although this technique offers improved performance with more data files, the benefit plateaus at 4, 8, or in rare cases 16 data files. Thus, a commodity server might show improving performance on user databases with two and four data files, but stop showing any improvement beyond four data files.
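To apply this ratio, first check how many logical processors the instance actually sees; a quick sketch:

-- Number of logical processors visible to this SQL Server instance
SELECT cpu_count
FROM sys.dm_os_sys_info;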
Your mileage may vary, so be sure to test any changes in a nonproduction environment before implementing them. Suppose we have a new database application, called BossData, coming online that is a very important production application. It is the only production database on the server, and according to the guidance provided earlier, we have configured the disks and database files along the lines of the sketch below.
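The following sketch assumes a server with eight logical CPUs, and therefore four data files; the drive letters, sizes, and file names are purely illustrative:

-- Four equally sized data files on separate RAID volumes,
-- plus a transaction log on its own dedicated volume
CREATE DATABASE BossData
ON PRIMARY
    (NAME = N'BossData1', FILENAME = N'E:\Data\BossData1.mdf', SIZE = 50GB),
    (NAME = N'BossData2', FILENAME = N'F:\Data\BossData2.ndf', SIZE = 50GB),
    (NAME = N'BossData3', FILENAME = N'G:\Data\BossData3.ndf', SIZE = 50GB),
    (NAME = N'BossData4', FILENAME = N'H:\Data\BossData4.ndf', SIZE = 50GB)
LOG ON
    (NAME = N'BossData_log', FILENAME = N'L:\Logs\BossData_log.ldf', SIZE = 25GB);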
At first, the database performs well. However, it occasionally slows down for no immediately evident reason. Why would that be? As it turns out, the size of multiple data files is also important. SQL Server fills multiple data files proportionally to the free space in each, so files of unequal size receive unequal shares of the write activity. In a situation where BossData needs a large, fixed amount of storage, it would be much better to have eight equally sized data files than six 50GB data files plus two files of a different size.
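File-level latency is the next thing to check. A common way to measure it is with the sys.dm_io_virtual_file_stats dynamic management function; the following sketch reports the average read and write latency for every database file on the instance:

-- Average read/write latency (in ms) per database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;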
A query like the sketch above shows the latency of all of the data files, the log file, and the disks they reside on. But in practice, a latency twice as high as the recommendations is often acceptable to most users. When initially sizing the database, estimate the amount of space required not only for operating the database in the near future, but also its total storage needs well into the future. Over-relying on the default autogrowth features causes two significant problems. First, growing a data file causes database operations to slow down while the new space is allocated and can lead to data files with widely varying sizes for a single database.
Second, constantly growing the data and log files typically leads to more logical fragmentation within the database and, in turn, performance degradation. Most experienced DBAs will also set the autogrow settings sufficiently high to avoid frequent autogrowths. For example, data file autogrow defaults to a meager 25MB, which is certainly a very small amount of space for a busy OLTP database.
It is recommended to set these autogrow values to a considerable percentage of the size the file is expected to reach at the one-year mark. So, for a database with a 100GB data file and a 25GB log file expected at the one-year mark, you might set the autogrowth values to 10GB and 2.5GB, respectively. We still recommend leaving the Autogrowth option enabled. You certainly do not want to ever have a data file, and especially a log file, run out of space during regular daily use. However, our recommendation is that you do not rely on the Autogrowth option to ensure the data files and log files have enough open space.
Preallocating the necessary space is a much better approach. Additionally, log files that have been subjected to many tiny, incremental autogrowths have been shown to underperform compared to log files with fewer, larger file growths. Internally, the transaction log is managed as a chain of virtual log files, and each growth adds more of them; this chaining works seamlessly behind the scenes, but an excessive number of tiny virtual log files slows down log operations.
The prevailing best practice for autogrowth is to use an absolute number, such as a fixed number of megabytes, rather than a percentage, because most DBAs prefer a very predictable growth rate on their data and transaction log files. You can alternatively use Transact-SQL to modify the Autogrowth settings for a database file, as in the following sketch based on a growth rate of 10GB and an unlimited maximum file size:
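A minimal sketch (the logical file name is hypothetical):

-- Fixed 10GB growth increments, no upper limit on file size
ALTER DATABASE BossData
MODIFY FILE (NAME = N'BossData1', FILEGROWTH = 10GB, MAXSIZE = UNLIMITED);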
Anytime SQL Server has to initialize a data or log file, it overwrites any residual data remaining on the disk sectors from previously deleted files. This process fills the files with zeros and occurs whenever SQL Server creates a database, adds files to a database, expands the size of an existing log or data file through autogrow or a manual growth process, or restores a database or filegroup.
But when the files are large, file initialization can take quite a long time. It is possible to avoid full file initialization on data files through a technique called instant file initialization. When instant file initialization is enabled, SQL Server skips zeroing the file and instead overwrites any existing data only as new data is written to the file. Instant file initialization does not work on log files, nor on databases where transparent data encryption is enabled.
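A commonly used way to verify whether instant file initialization is actually in effect is to temporarily enable trace flags 3004 and 3605, which write file-zeroing activity to the error log, and then create a throwaway database. A sketch follows; the database name is hypothetical, and xp_readerrorlog is an undocumented procedure:

DBCC TRACEON(3004, 3605, -1);  -- write zeroing messages to the error log
CREATE DATABASE IFI_Probe;
-- 'Zeroing' messages for the data file mean instant file initialization is OFF;
-- the log file is always zeroed, so a message for the .ldf is expected
EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';
DROP DATABASE IFI_Probe;
DBCC TRACEOFF(3004, 3605, -1);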
Instant file initialization relies on a Windows-level permission granted to members of the Windows Administrators group and to users with the Perform Volume Maintenance Tasks security policy.

The Shrink Database task reduces the physical database and log files to a specific size. This operation removes excess space in the database based on a percentage value. In addition, you can enter thresholds in megabytes, indicating the amount of shrinkage that needs to take place when the database reaches a certain size and the amount of free space that must remain after the excess space is removed.
Free space can be retained in the database or released back to the operating system. It is a best practice not to shrink the database. First, when shrinking the database, SQL Server moves full pages at the end of data files to the first open space it can find at the beginning of the file, allowing the end of the files to be truncated and the file to be shrunk. This process can increase the log file size because all moves are logged. Second, if the database is heavily used and there are many inserts, the data files may have to grow again. SQL Server 2005 and later address slow autogrowth with instant file initialization; therefore, the growth process is not as slow as it was in the past.
However, sometimes autogrow does not catch up with the space requirements, causing performance degradation. Finally, simply shrinking the database leads to excessive fragmentation. If you absolutely must shrink the database, you should do it manually when the server is not being heavily utilized.
You can shrink a database by right-clicking it and selecting Tasks, Shrink, and then Database or File. Alternatively, you can use Transact-SQL to shrink a database or file, as in the sketch below.
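A brief sketch (the database, logical file name, and targets are hypothetical):

-- Shrink the whole database, leaving 10 percent free space
DBCC SHRINKDATABASE (BossData, 10);

-- Or shrink a single file to a target size of 7000MB
USE BossData;
DBCC SHRINKFILE (N'BossData_log', 7000);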
Although it is possible to shrink a log file, SQL Server is not able to shrink the log file past the oldest active transaction. For example, imagine a 10GB transaction log that has been growing at an alarming rate with the potential to fill up the disk soon.
If the last open transaction was written to the log file at the 7GB mark, even if the space prior to that mark is essentially unused, the shrink process will not be able to shrink the log file to anything smaller than 7GB.
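Before attempting a log shrink, it helps to know how full the log really is and what, if anything, is preventing it from being reused; a quick sketch:

-- Percentage of each transaction log currently in use
DBCC SQLPERF(LOGSPACE);

-- Reason log truncation is being held up, per database
SELECT name, log_reuse_wait_desc
FROM sys.databases;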
The Database Properties dialog box is where you manage the configuration options and values of a user or system database. You can execute additional tasks from within these pages, such as database mirroring and transaction log shipping. The upcoming sections describe each page and setting in its entirety. To invoke the Database Properties dialog box, right-click the desired database in the SSMS Object Explorer and select Properties. The second Database Properties page is called Files.
Use the Files page to configure settings pertaining to database files and transaction logs. You will spend time working in the Files page when initially rolling out a database and conducting capacity planning. A few settings on the Options and Change Tracking pages are also worth keeping in mind.
Filegroups are used to house data files. Log files are never housed in filegroups. Every database has a primary filegroup, and additional secondary filegroups may be created at any time. The primary filegroup is also the default filegroup, although the default filegroup can be changed after the fact. Whenever a table or index is created, it will be allocated to the default filegroup unless another filegroup is specified, as in the sketch below.
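A brief sketch of working with filegroups (the filegroup, file, table, and path names are all hypothetical): create a secondary filegroup, add a data file to it, place a table on it explicitly, and optionally make it the default:

-- Create a secondary filegroup and give it a data file
ALTER DATABASE BossData ADD FILEGROUP Archive;

ALTER DATABASE BossData
ADD FILE (NAME = N'BossData_Archive',
          FILENAME = N'G:\Data\BossData_Archive.ndf',
          SIZE = 10GB)
TO FILEGROUP Archive;

-- Place a table on the Archive filegroup explicitly
CREATE TABLE dbo.OrderHistory
    (OrderID INT NOT NULL, OrderDate DATETIME NOT NULL)
ON Archive;

-- Or make Archive the default filegroup for future objects
ALTER DATABASE BossData MODIFY FILEGROUP Archive DEFAULT;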