SQL Server 2014 is jockeying for King of OLTP.
While completely bypassing BI enhancements, Microsoft has attacked a host of OLTP
performance problems from many angles: slow disk performance with
in-memory tables, slow log performance with delayed durability, and
maintenance concurrency with lock priority. It didn't stop there. The
new SQL Server release puts the kibosh on excessive I/O with new
Resource Governor I/O control, addresses memory issues with an SSD
buffer pool extension, and increases availability through integration
with the Azure cloud.
In-Memory OLTP
For OLTP, the most exciting new feature in SQL Server 2014 is In-Memory OLTP (aka "Hekaton"), which allows you to move individual tables to special in-memory structures. The performance boost can be as high as 30x. There
are a number of limitations and special requirements for these tables,
so they won't work under every circumstance. But when they do, your OLTP
performance will go through the roof. This is better than other
in-memory solutions that require the entire database to be placed in
memory. You can get even more performance by converting existing stored procedures into natively compiled procedures, too. You'll need to test to make
sure your tables are compatible, but if they are, you'll love this
feature.
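If you want to see what the setup looks like, here's a minimal sketch of creating a durable memory-optimized table, assuming an existing database called SalesDB; the filegroup, file path, and table definition are placeholders:

    -- A memory-optimized filegroup and container must exist first (path is a placeholder)
    ALTER DATABASE SalesDB ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDB ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
        TO FILEGROUP imoltp_fg;

    -- Durable in-memory table with a hash index on the primary key
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT       NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);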
Managed Backup to Azure
There are plenty of small and midsize shops without qualified DBAs on staff.
Quite often these shops don't discover that their backups aren't being
handled properly until it's too late. True to its name, Managed Backup
automatically backs up your database (or your instance) based on your
defined recovery interval and workload patterns. When the system
determines the data has changed significantly enough, it takes a backup
to Azure. This feature only works with Azure blob storage. But since
your backups are already offsite, there's no need to worry about tapes.
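As a rough sketch of how it's enabled, using the SQL Server 2014 smart_admin procedures in msdb (the storage account, access key, credential, and database names below are all placeholders):

    -- Credential holding the Azure storage account name and access key
    CREATE CREDENTIAL ManagedBackupCredential
        WITH IDENTITY = 'mystorageaccount',
             SECRET = '<storage access key>';

    -- Enable Managed Backup for one database with a 30-day retention window
    EXEC msdb.smart_admin.sp_set_db_backup
        @database_name   = 'SalesDB',
        @enable_backup   = 1,
        @retention_days  = 30,
        @credential_name = 'ManagedBackupCredential';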
Azure VMs for Availability replicas
With SQL Server 2014, you can define an Availability Group replica that
resides in Azure. When a primary failure happens, you have to fail over
manually, but you will be up and running very quickly. And as long as
your primary is online you can still push your reporting to the Azure
replica to offload that activity from production. If you need reliable,
off-site HA but don't have a second site, then this feature is for you.
Just pick the location you'd like when you create the Azure VM, and
you're set.
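A rough sketch of adding the Azure replica to an existing Availability Group follows; the group, server, and endpoint names are placeholders, and the Azure VM must already be joined to the underlying Windows failover cluster:

    -- Add the Azure VM as an asynchronous, manual-failover, readable secondary
    ALTER AVAILABILITY GROUP [AG_Sales]
        ADD REPLICA ON 'AZUREVM01'
        WITH (
            ENDPOINT_URL = 'TCP://AZUREVM01.contoso.com:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE = MANUAL,
            SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
        );

    -- In a disaster, run this on the Azure replica to force a manual failover
    -- ALTER AVAILABILITY GROUP [AG_Sales] FORCE_FAILOVER_ALLOW_DATA_LOSS;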
SQL Server Data Files in Azure
Data Files in Azure is just what it sounds like: Your database runs locally
in your data center, while the database files themselves live in an
Azure blob container. This can offer advantages in DR and migration. But
depending on the size of the database and its workload, the potential
performance cost of pushing the data for every transaction across the
Internet could be prohibitive. A better use of this feature may be to pair it with SQL Server running in an Azure VM, keeping the data files in blob storage in the same data center. This can also get you around the current limitation of having only 16 mounted disks in an Azure VM.
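A minimal sketch of the setup; the storage account, container, SAS token, and database names are placeholders:

    -- Credential named after the container URL, authenticated with a shared access signature
    CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/datafiles]
        WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
             SECRET = '<SAS token>';

    -- Create a database whose files live in that blob container
    CREATE DATABASE SalesDB
        ON (NAME = SalesDB_data,
            FILENAME = 'https://mystorageaccount.blob.core.windows.net/datafiles/SalesDB_data.mdf')
        LOG ON (NAME = SalesDB_log,
            FILENAME = 'https://mystorageaccount.blob.core.windows.net/datafiles/SalesDB_log.ldf');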
Updateable columnstore indexes
Columnstore indexes in SQL Server 2012 brought a dramatic boost to data warehouse performance, but with a hitch: They couldn't be updated. With SQL Server
2014, now they can. This means you no longer have to drop and re-create
columnstore indexes every time you need to load your warehouse tables.
Not only that, but updateability also means you may be able to look at
columnstore indexes for certain OLTP applications. The caveat is that the index must be a clustered columnstore index; nonclustered columnstore indexes remain read-only.
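For example, a quick sketch against a hypothetical fact table:

    -- Convert the table to a clustered columnstore index (it becomes the table's storage)
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

    -- The table can now be modified in place
    INSERT INTO dbo.FactSales (SaleDate, ProductId, Quantity, Amount)
    VALUES ('20140301', 42, 3, 29.97);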
Resource Governor for I/O
Disk I/O is typically the most constrained resource in a database system, and often a large or rogue query will consume more precious I/O than you can afford. Microsoft has finally given us some control over runaway I/O. With Resource Governor for I/O, you can now put queries into their own resource pool and limit the amount of I/O per volume they're allowed. MIN_IOPS_PER_VOLUME reserves a minimum number of I/O operations per second for a pool on a given disk volume, while MAX_IOPS_PER_VOLUME caps the maximum. The cap doesn't limit the total number of I/O operations a query can perform; it merely keeps the query from monopolizing a disk. This way your large queries can still run, but other things will run as well. A good use of I/O control is to reserve some IOPS for administrators so they can investigate issues when the disks are overloaded.
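Here's a minimal sketch of wiring that up; the pool and workload group names are placeholders, and in practice a classifier function routes sessions into the group:

    -- Resource pool that reserves 50 IOPS per volume and caps the workload at 200
    CREATE RESOURCE POOL ReportingPool
        WITH (MIN_IOPS_PER_VOLUME = 50, MAX_IOPS_PER_VOLUME = 200);

    -- Workload group tied to that pool
    CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;

    ALTER RESOURCE GOVERNOR RECONFIGURE;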
Delayed durability
In SQL Server, changes to data are written to the log first. This is called write-ahead logging (WAL). Control isn't returned to the
application until the log record has been written to disk (a process
referred to as "hardening"). Delayed durability allows you to return
control back to the application before the log is hardened. This can
speed up transactions if you have issues with log performance. Nothing
is free, though, and here you sacrifice recoverability. Should the database go down before the log records are flushed to disk, you lose those transactions forever. It may be worth the risk if your log
performance is severely degrading application response times.
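Here's a sketch of both knobs; the database and table names are placeholders:

    -- Allow delayed durability at the database level (or use FORCED to apply it everywhere)
    ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;

    -- Opt in for a single transaction
    BEGIN TRANSACTION;
        UPDATE dbo.ShoppingCart SET UserId = 7 WHERE CartId = 1;
    COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);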
SSD buffer pool extension
Creating a buffer pool extension for SQL Server 2014 is like being able to
define a different page file in Windows. As data pages move into memory,
they begin to fill up the buffer pool. If the buffer pool fills up, the
less frequently used pages will be paged to disk. Then when they're
needed again, they'll be swapped with something else in the buffer pool
and moved back into memory. The buffer pool extension option allows you
to define an SSD as a buffer file location. Because SSD is so much
faster than spinning disk, the paging is considerably quicker, which
increases performance dramatically in some cases. You can define a buffer pool extension file up to 32 times the size of your configured max server memory.
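Enabling it is a one-liner; the file path and size below are placeholders:

    -- Point the buffer pool extension at a file on an SSD volume
    ALTER SERVER CONFIGURATION
        SET BUFFER POOL EXTENSION ON
        (FILENAME = 'S:\SSDCACHE\BufferPoolExt.bpe', SIZE = 64 GB);

    -- To back it out:
    -- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;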
Incremental statistics
Updating statistics in SQL Server is the very definition of redundant work.
Whenever statistics need to be rebuilt, you can't just update the new
items -- you have to update everything. This means that a table with 200
million rows and only 40 million changes will need to update all 200
million rows in order to pick up those changes. Incremental statistics in SQL Server 2014 allow you to update statistics for just the partitions that have changed and merge them with what's already there. This can have a big impact on
query performance in some configurations.
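The feature applies to partitioned tables; here's a sketch with placeholder table, column, and partition numbers:

    -- Create statistics that track each partition separately
    CREATE STATISTICS Stat_FactSales_SaleDate
        ON dbo.FactSales (SaleDate)
        WITH FULLSCAN, INCREMENTAL = ON;

    -- Later, refresh only the partitions that were loaded
    UPDATE STATISTICS dbo.FactSales (Stat_FactSales_SaleDate)
        WITH RESAMPLE ON PARTITIONS (42);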
Lock priority of online operations
You can now specify a lock priority for online re-indexing. In previous
versions of SQL Server, long-running queries could block re-indexing
operations, chewing up your maintenance window while your re-index op
sits waiting, doing nothing. In SQL Server 2014, you can specify how
your re-index operation will handle being blocked. You specify how long
it will wait and what to do when the wait is over. Will you have it
follow traditional behavior and wait indefinitely? Will you have it
terminate and move to the next table? Or will you kill the blocking
query, so your re-indexing can complete? It's your choice.
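A sketch of the syntax, with placeholder index and table names:

    -- Online rebuild that waits up to 5 minutes at low priority, then kills its blockers
    ALTER INDEX IX_FactSales_SaleDate ON dbo.FactSales
        REBUILD WITH (ONLINE = ON (
            WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS)));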
* Content copied from www.infoworld.com