Final chance to register for PASS SQLRally Nordic 2013

I'm attending SQLRally

Three years ago, some people had the crazy idea of arranging a viking-themed, world-class SQL Server / Business Intelligence conference in a cold and sparsely populated corner of the world: Sweden. It’s almost as crazy as Sweden’s efforts to host the Summer Olympic Games, which Sweden last hosted in 1912 (except for the equestrian events, which were held in Sweden in 1956).

There are countless SQL Server databases in the world. The demand for SQL Server competence is huge, since millions of important applications depend on well-designed and well-managed SQL Server databases. SQL Server is also one of the world’s most popular platforms for building data warehouses and Business Intelligence solutions. Learning SQL Server opens up fantastic job opportunities!

But how can you learn more about SQL Server when you are probably already very busy with your job? Every now and then you should take the opportunity to get away from your daily duties and come back with new inspiration, new friends and new knowledge. And what is better for that than a really good conference?

Welcome to SQLRally

In 2011, our crazy idea became reality: the world’s first PASS SQLRally conference outside the US. We packed the conference center with almost 500 participants and fantastic speakers, and had to close registration one month before the event because it was fully booked! We had to cap the number of attendees because of fire safety regulations (and placed some strong vikings to guard the entrance). Now we are back with PASS SQLRally Nordic 2013, with an even larger conference center, more speakers and an even more exciting programme. So don’t miss this! Take the final chance to register for the conference, which takes place in Stockholm on 4-6 November 2013. The worst that can happen is that your boss says no…

SQLRally crowd

Watch video recordings from the last PASS SQLRally Nordic here:
http://www.sqlpass.org/sqlrally/2012/nordic/Agenda/VideoRecordings.aspx

Still not convinced? Check out all the fun in these photos!
http://www.flickr.com/search/?q=sqlrally+Nordic

e-Type

SQLRally vikings

Looking forward to seeing you!

Johan Åhlén
co-founder and organizer, PASS SQLRally Nordic

Comments Off

Filed under Technical

Specifying which login to use when connecting to a SQL Server using Windows Authentication

You have probably run across situations where you want to connect to a SQL Server using Management Studio and Windows Authentication, but with a different account than the one you are currently logged in with. As seen in the screenshot below, you don’t get the option to enter a username and password the way you do with SQL Server Authentication.

Management Studio

There are a few options for changing the user name.

Using the runas command in a command prompt

By typing “runas /netonly /user:yourdomain\yourusername ssms.exe” in a command prompt, you can start Management Studio with other credentials. You might need to replace ssms.exe with its full path (it should be under C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\VSShell\Common7\IDE\ or a similar location).

run as

Using the “Run as different user” option in Windows Explorer

Another option is to Shift+Ctrl+right-click the icon that starts Management Studio. You then get an option to run Management Studio as a different user.

Run As different user

Using Additional Connection Parameters

You can also specify a different user account by manually entering connection parameters on the Additional Connection Parameters tab of the login dialog. You’ll need to add: Integrated Security=SSPI;User ID=mydomain\myuser;Password=yourpassword.

Management Studio


SQL Server 2014 CTP2 released

At PASS Summit last week, Quentin Clark announced the release of SQL Server 2014 CTP2. This release is feature complete, which means you can test all the new functionality of SQL Server 2014. I have described the main new features of SQL Server 2014 in this earlier blog post.

The download is available on this SQL Server 2014 webpage.

More detailed information is available on the official SQL Server product team blog. From my perspective, the most exciting part is no doubt the In-Memory capabilities, enabling up to 30x performance gains, 100x faster star-join queries (typically used by data warehouses) and up to 90% disk space savings. That’s a really good reason to start looking into SQL Server 2014.


Finding out where your disk space is being used

I ran out of disk space on my Windows 8 laptop. This was very strange, since I had plenty of free space a week ago and I hadn’t done anything that should have consumed it. I freed up a few GB by deleting unused files, but soon this space was consumed again. Even as I watched and clicked the Refresh button every 10 seconds, I could see the free disk space steadily decreasing.

Was this some kind of malicious software? An application that kept writing to a log file? Some performance monitoring that I had forgotten to deactivate? I had no clue, and I really had to solve this, or my laptop would be more or less useless.

I managed to find a free tool, SpaceSniffer, that really helped. It visualizes your disk usage as a treemap – with an overview and the possibility to “drill down” / zoom into the details of your disk usage simply by double-clicking an object. It’s like a Business Intelligence tool for analysing your disk usage. However, it still couldn’t find what was causing my problem. Very strangely, the total disk space reported by the tool was much lower than the real capacity of the disk. This was very worrying – I had to keep looking…

I tried another trick. Maybe the tool wasn’t able to investigate all of my folders, so I started SpaceSniffer with the “Run as administrator” option in Windows. Suddenly it found all of my disk space!

The culprit was something called the “Windows Search service”, which stored an index file that took an enormous amount of space. The description of the service says “Provides content indexing, property caching, and search results for files, e-mail, and other content”. I don’t know if it’s a bug in the Windows Search service or if it’s because of my large amount of email, but it had generated a file of 70 GB (on my 220 GB disk) that was constantly growing! No wonder I was running out of space. The solution was to stop the service and delete the large file.

So my recommendation, if you want to find out how your disk space is being used: try SpaceSniffer and run it as administrator.


What’s new in SQL Server 2014?

Just when we thought that nothing could get any better than SQL Server 2012, Microsoft has started announcing news about SQL Server 2014. There’s even a SQL Server 2014 web site where you can register to be notified when the trial version is available.

So, what’s new and cool?

Hekaton in-memory database capabilities

Hekaton makes it possible to really speed up databases by storing objects in RAM. With today’s really cheap memory prices, this is a very interesting feature. The name Hekaton (which means a hundred in Greek) indicates that Microsoft’s goal is to make databases 100 times faster. This is very useful both for OLTP and DW databases (in the ETL process). The main difference compared to most competitors is that Hekaton makes it very easy to mix in-memory data with traditional disk-based data, uses natively compiled stored procedures to increase performance further, and provides a new hash index optimized for memory access. More info is available in this white paper by Kalen Delaney.
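As a rough sketch of how this looks (the table and procedure names are made up, and since SQL Server 2014 is still pre-release the final syntax may differ slightly), a memory-optimized table and a natively compiled stored procedure could be declared like this:

```sql
-- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure: compiled to machine code at creation time
CREATE PROCEDURE dbo.InsertCart @CartId INT, @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
    VALUES (@CartId, @CustomerId, SYSUTCDATETIME());
END;
```

Note the hash index with an explicit bucket count – that is the new memory-optimized index type mentioned above.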

AlwaysOn improvements

With SQL Server 2014 you will be able to have up to 8 secondaries (instead of a maximum of 4 in SQL Server 2012). Also, your readable secondaries will remain online and readable when your primary node becomes unavailable (as opposed to SQL Server 2012, where the readable secondaries become unavailable).

There’s also a new wizard that lets you easily add a secondary node hosted in Azure as a Virtual Machine. It means you can set up an AlwaysOn secondary without having to invest in another server or build a new data center.

Updateable clustered columnstore indexes

SQL Server 2012 introduced columnstore indexes, which can give huge performance boosts, especially for large fact tables in data warehouses. However, they still relied on a traditional rowstore of data behind the columnstore index. Now, in SQL Server 2014, you can truly store your data in a columnstore by using clustered columnstore indexes. Another limitation that has been lifted is that they can now be updated (INSERT, UPDATE, DELETE) like any other table. These are really huge benefits compared to SQL Server 2012.
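A minimal sketch, assuming a hypothetical fact table dbo.FactSales:

```sql
-- Convert an existing rowstore table to a clustered columnstore
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- The table is now stored fully column-wise, and unlike the
-- SQL Server 2012 columnstore it remains updateable:
INSERT INTO dbo.FactSales (DateKey, ProductKey, Amount)
VALUES (20131104, 42, 99.50);
```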

Resource Governor for IO

The Resource Governor is a very important feature for consolidating databases and managing CPU and memory between them. However, there has been no way to control IO, which is also a very important resource to manage. This new feature will make SQL Server 2014 even better for “SQL Server hotels” and other consolidated environments.
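In SQL Server 2014 this shows up as new per-volume IOPS options on resource pools. A sketch (the pool name is made up):

```sql
-- Cap a tenant's IO as well as its CPU
CREATE RESOURCE POOL TenantAPool
WITH (MAX_CPU_PERCENT     = 30,
      MIN_IOPS_PER_VOLUME = 50,    -- reserve a minimum
      MAX_IOPS_PER_VOLUME = 500);  -- and enforce a ceiling

ALTER RESOURCE GOVERNOR RECONFIGURE;
```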

Further Reading


PASS SQLRally Nordic is back!

We’re happy to announce that there will be another PASS SQLRally Nordic this year. Almost two years ago we started in Stockholm, and now we are back! It will be held 4-6 November at the Clarion Hotel Arlanda Airport, near Stockholm.

Last time it was sold out a month before the event started. This year we will have an even bigger conference center, more speakers and an even more exciting programme. As co-founder of SQLRally Nordic, I’m very happy with the earlier success and with how SQLRally Nordic has become one of the world’s top SQL Server conferences.

One of the new things this year is a special one-day Executive Track for decision makers and managers. We will have some of the world’s top Business Intelligence and SQL Server key people presenting, which means we can deliver very interesting content for executives as well.

Looking forward to seeing you. Don’t forget to register by 10 May to get a EUR 300 discount!

PASS SQLRally Nordic 2013 top banner

By the way – last time in Stockholm we had a Viking theme. Can you guess from our graphical profile what our theme is this year?


SQL Server – an Enterprise Ready and Mission Critical Database Platform?

When I started with computers in the early 1980s, my personal computer used a cassette recorder to store data. This was the main option if you couldn’t afford a mainframe computer, which most businesses couldn’t. Enterprise Readiness meant that you could maybe get up and running again within a few days if your computer broke or your tapes/disks were damaged.

What requirements do we have of an Enterprise Ready database platform? I’d suggest four areas:

  • Availability
  • Security
  • Consistency
  • Performance/scalability

How has SQL Server supported these requirements over time?

SQL Server 2000

I think SQL Server 2000 was the first version where we could really talk about Enterprise Readiness. Let’s look at its new features from an Enterprise Readiness perspective.

Failover Clustering

We all know that computers break. It could be network cards, power supplies, etc. Windows 2000 introduced support for MSCS (Microsoft Cluster Service, later known as Windows Server Failover Clustering, WSFC), which means that the operating system can handle automatic failover when a server goes down. Basically you have an active node and a number of passive nodes pointing to the same disk. This required a shared disk, and the only kind of shared disk supported was a SAN (Storage Area Network). If the SAN broke down – well, then you had a big problem.

Replication

Another High Availability feature of SQL Server 2000 was Replication, which allows multiple databases to synchronise tables. It could even synchronise SQL Server with completely different databases (for example Oracle). The synchronisation could be done using a few different methods, including transactional replication – which uses the transaction logs to continuously track changes – and snapshot replication – which periodically compares the data. However, Replication doesn’t come with any kind of automatic failover handling.

Log Shipping

Log Shipping is a simple yet useful technology. By taking regular backups of the transaction log of a database and replaying them on a second database (by restoring them), the second database is kept synchronized with the main database. If you, for example, take backups every 10 minutes, you will never lose more than 10 minutes of data. This is useful for Disaster Recovery scenarios. The delay can also be an advantage: if you do something really bad to the main database, you can potentially restore objects from the second database.
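The mechanics boil down to two commands (database name and backup path here are made up):

```sql
-- On the primary: back up the transaction log every 10 minutes
BACKUP LOG SalesDB
    TO DISK = N'\\backupserver\logship\SalesDB_1015.trn';

-- On the secondary: restore it, keeping the database ready for more logs
RESTORE LOG SalesDB
    FROM DISK = N'\\backupserver\logship\SalesDB_1015.trn'
    WITH NORECOVERY;  -- or STANDBY to allow read-only access in between
```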

64-bit support

In 2003, the SQL Server 2000 64-bit edition was released. Going from 32 to 64 bits means you can handle much larger addressable memory, which is crucial for large databases. While 32 bits gives you an addressable memory of about 4 GB, 64 bits gives a limit of about 16 million terabytes. Since there is still no database in the world larger than about 1 million terabytes (as far as I know), this “ought to be enough” for a few more years.

ACID support

This feature was not new in SQL Server 2000, but it’s so important that I want to include it here. The term ACID was coined in the early 1980s as an acronym for Atomicity, Consistency, Isolation, Durability. It means that even if the power is shut down while updating the database, even if multiple applications write at the same time, and even if you try to write something invalid to the database, it will handle all of this and still be reliable. This is supported by all modern relational databases, and it is the main reason that SQL Server has a transaction log (containing undo and redo records). The guarantee is that once an operation has been confirmed (which means it has been written to the transaction log), you can always trust that it has updated the database. This is obviously crucial, for example, in a banking application where you must know that transactions don’t suddenly disappear, and it is one of the main reasons many applications use databases instead of just storing data directly in files.
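The classic illustration of atomicity is a money transfer (table and column names made up). Either both updates happen or neither does, even if the power goes out in between:

```sql
BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
COMMIT TRANSACTION;  -- "confirmed" = hardened in the transaction log
```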

SQL Server 2005

The 2005 version was another very big step in SQL Server history, especially on the Business Intelligence and Data Warehouse side. However, in this blog post I will cover only the traditional database features.

Database Mirroring

As an alternative to Failover Clustering, Microsoft introduced Database Mirroring. The main difference is that each server stores its data locally, so there is no dependency on a SAN. It can operate in both High Protection mode (with zero data loss) and High Performance mode (with asynchronous data transfer between the principal and mirror servers). Another advantage compared to Failover Clustering is that the database on the mirror server can be used (in read-only mode, though). However, there are limitations with this technology too, mainly that it cannot guarantee that multiple databases fail over at the same time.

Checksums

Something that has much increased the reliability of SQL Server databases is the introduction of checksum control of the data pages. This means that SQL Server can detect pages that have been corrupted. Damaged databases can often be repaired if the damage is found early.
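Enabling it is a one-liner (database name made up), and checking a database for corruption is equally simple:

```sql
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;

-- Verify integrity regularly, so damage is found early
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;
```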

Instant File Initialization

This is a simple but major performance improvement. When a data file grows, you can configure SQL Server so it no longer has to fill the new space with zeroes (note that this applies to data files only – the transaction log must still be zero-initialized). Filling the empty space with zeroes can take a long time and could lead to long periods of unresponsiveness (minutes) if the database needed to autogrow.

Rows can be larger than 8000 bytes

In SQL Server 2000, each row of data had to fit in a single 8 KB page. While I don’t recommend designing tables with huge rows, it is a big improvement that this limitation was removed in 2005. This definitely makes things easier for application developers.

T-SQL language improvements

The T-SQL language was much improved in 2005. My favorite new features were TRY-CATCH blocks and CTEs (Common Table Expressions, which among other things allow recursion in SQL). Recompilation of stored procedures also became more efficient, with statement-level recompiles instead of recompiling the whole stored procedure.
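Both favorites in one sketch (the Employees/Orders tables are made up for illustration):

```sql
-- Recursive CTE: walk an employee hierarchy top-down
WITH OrgChart AS
(
    SELECT EmployeeId, ManagerId, 0 AS Level
    FROM dbo.Employees WHERE ManagerId IS NULL
    UNION ALL
    SELECT e.EmployeeId, e.ManagerId, o.Level + 1
    FROM dbo.Employees AS e
    JOIN OrgChart AS o ON e.ManagerId = o.EmployeeId
)
SELECT * FROM OrgChart;

-- Structured error handling instead of checking @@ERROR after every statement
DECLARE @OrderId int = 42;
BEGIN TRY
    DELETE FROM dbo.Orders WHERE OrderId = @OrderId;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrNo, ERROR_MESSAGE() AS ErrMsg;
END CATCH;
```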

Plan Guides

Often you have no possibility to change the SQL queries, for example if you are running an application that you haven’t developed yourself. In 2005, SQL Server introduced Plan Guides, which let you tell the server that when it recognizes a certain SQL query (or runs a certain stored procedure) it should modify its execution plan. The result is that you can do advanced performance tuning of SQL queries that come from third-party providers, where you normally have no chance to edit the queries.
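A sketch of attaching a hint to a query you cannot edit (the statement and guide name are made up; the statement text must match what the application submits exactly):

```sql
EXEC sp_create_plan_guide
    @name            = N'Guide_Orders_Maxdop',
    @stmt            = N'SELECT * FROM dbo.Orders WHERE CustomerId = 42',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (MAXDOP 1)';  -- hint injected into the plan
```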

Optimistic concurrency support

An additional isolation level, SNAPSHOT, was introduced, in which readers work from row versions instead of taking shared locks – so readers no longer block writers, or vice versa. This can solve problems with deadlocks and with applications that spend most of their time waiting because of locking issues.
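Enabling and using it looks like this (database and table names made up):

```sql
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Reads a consistent version of the data without shared locks,
    -- even while other sessions are updating the table
    SELECT SUM(Amount) FROM dbo.Orders;
COMMIT;
```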

DMVs

SQL Server 2005 opened up new possibilities to diagnose the server and databases on a more detailed level through Dynamic Management Views and Dynamic Management Functions. These make it possible to check things like fragmentation, memory usage and statistics on how often an index is used. There are even DMVs that give you suggestions for new indexes. These DMVs are very useful for performance tuning and troubleshooting.
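Two typical examples – index usage and fragmentation:

```sql
-- How often is each index actually used in this database?
SELECT OBJECT_NAME(s.object_id) AS TableName, i.name AS IndexName,
       s.user_seeks, s.user_scans, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();

-- How fragmented are my indexes?
SELECT OBJECT_NAME(object_id) AS TableName, index_id,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED');
```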

Partitioning

Partitioning was another key feature introduced in SQL Server 2005. Large tables can benefit very much from being split into partitions. It can increase performance as well as make it possible to spread your partitions over different disks. For instance, you can put old data on a slower disk and new data on a faster disk. Also, two applications can write to different partitions without disturbing each other (if you enable partition-level locking).
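A minimal sketch, partitioning a hypothetical fact table by year (mapping everything to PRIMARY here; in practice you would map partitions to different filegroups/disks):

```sql
CREATE PARTITION FUNCTION pfYear (int)
    AS RANGE RIGHT FOR VALUES (2011, 2012, 2013);

CREATE PARTITION SCHEME psYear
    AS PARTITION pfYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactSales
(
    YearKey int   NOT NULL,
    Amount  money NOT NULL
) ON psYear (YearKey);  -- rows land in the partition for their year
```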

Schemas

Schemas are a great improvement in how to manage security. Since SQL Server 2005, you can create schemas to group your objects under. You can then grant users access to schemas, which means they get access to all objects belonging to that schema. It’s like managing access at folder level in Windows instead of at file level.
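For example (schema, table and role names made up):

```sql
CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
CREATE TABLE Sales.Orders (OrderId int PRIMARY KEY);
GO
-- One grant covers every object in the schema, present and future
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;
```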

Asymmetric keys and certificates

Another important new security feature was the possibility to use certificates and asymmetric keys in SQL Server 2005. That makes it possible to do strong authentication and to create/verify digital signatures.

Business Intelligence news

As I mentioned in the beginning, the biggest news in SQL Server 2005 was perhaps for Business Intelligence and Data Warehouse users. Analysis Services (SSAS) and Integration Services (SSIS, formerly known as DTS – Data Transformation Services) were completely rewritten and deserve an Enterprise Readiness history of their own. Reporting Services was introduced. At the end of the day, this meant that BI/DW developers got a strong and broad BI platform with SQL Server 2005, and that SQL Server could serve many more Enterprise needs than before.

SQL Server 2008

SQL Server 2008 is seen by some people as not bringing much news compared to SQL Server 2005. However, I think the SQL Server 2008 news is very important from an Enterprise Readiness perspective.

Compression

Database compression was introduced in SQL Server 2008. The impact of compression should not be underestimated, since it not only reduces database sizes but also often increases performance, because compressed data uses less disk I/O (the most common bottleneck). Two compression options were introduced: ROW, which is the simplest, and PAGE, which requires more resources to do the compression but also gives higher compression rates.
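Applying it to a hypothetical fact table, with an estimate of the savings first:

```sql
-- Estimate how much space PAGE compression would save
EXEC sp_estimate_data_compression_savings
     'dbo', 'FactSales', NULL, NULL, 'PAGE';

-- Then rebuild the table compressed
ALTER TABLE dbo.FactSales REBUILD
    WITH (DATA_COMPRESSION = PAGE);  -- or ROW for the cheaper option
```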

Filestream

BLOBs (binary large objects) are something that most database platforms support. They can be used to encapsulate pictures, documents or even video recordings. The Filestream feature in SQL Server 2008 allows BLOBs to be stored on a separate file share instead of within the database itself. This means the BLOBs can also be accessed directly through the file system instead of through SQL Server. That can be a good performance benefit, by enabling Filestream just on tables with big BLOBs, but it also makes integrations easier in some cases.

Resource Governor

When consolidating and virtualizing SQL Server environments, it became increasingly important to ensure that no single user or application could use up too many resources. As in a mainframe system, you want to be able to prioritize users and applications and set limits on their usage of CPU and memory. The Resource Governor in SQL Server 2008 allows resource pools to be defined and users/applications to be tied to specific resource pools. Again, this is very important from an Enterprise Readiness perspective, where you want to run many databases on the same server.
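The three moving parts – pool, workload group and classifier function – might look like this (all names made up; the classifier must live in master):

```sql
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 25);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO
-- The classifier function routes each new session into a workload group
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN SUSER_NAME() = N'ReportUser'
                THEN N'ReportingGroup' ELSE N'default' END;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```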

Policy Based Management

The Policy Based Management features in SQL Server 2008 were introduced as a way to enforce policies, restrictions and behaviours on your SQL Servers. They can be applied to multiple servers and monitored from a single console. Policy Based Management uses a simple expression syntax and is very powerful in what you can control, which could be anything from naming standards to database options (so they cannot be modified by accidental DBAs).

Transparent Data Encryption

An additional security feature in SQL Server 2008 was Transparent Data Encryption (TDE). It encrypts the whole database automatically (no need to make any changes to applications). This is useful for databases containing sensitive information, where it is important that not just anyone with access to the backups or files can read the information. A symmetric encryption key is used, which is protected from unauthorized access by the operating system.
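The setup is a short chain of keys (database and certificate names made up; remember to back up the certificate, or the database cannot be restored elsewhere):

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
GO
USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;

ALTER DATABASE SalesDB SET ENCRYPTION ON;
```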

PowerShell support

PowerShell is a powerful scripting environment that enables IT professionals to write scripts for administrative tasks. Since SQL Server 2008, it has also been possible to perform SQL Server administrative tasks from PowerShell. Many database administrators use PowerShell to automate their tasks.

Change Tracking

Change Data Capture (CDC) and Change Tracking (CT) were introduced in SQL Server 2008 to capture a history of changes made to database tables. This is very useful for applications and other databases (such as a Data Warehouse) that can subscribe to this change history.
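Enabling the two looks like this (database and table names made up; CDC also requires SQL Server Agent to be running):

```sql
-- Change Tracking: lightweight "which rows changed since version X"
ALTER DATABASE SalesDB
    SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING;

-- Change Data Capture: full change history, harvested from the log
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;
```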

Extended Events

Extended Events is a way of monitoring what’s going on inside SQL Server. There are about 200 different extended events available in SQL Server 2008, giving a level of detail that has never been available before. Microsoft has also announced that it will retire the old SQL Profiler and that all monitoring in the future should use Extended Events instead.
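A small example session that captures statements running longer than one second (session name and file path made up; the file target shown is the SQL Server 2008 name, renamed event_file in later versions):

```sql
CREATE EVENT SESSION LongQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)  -- duration is in microseconds
ADD TARGET package0.asynchronous_file_target
    (SET filename = N'C:\Temp\LongQueries.xel');

ALTER EVENT SESSION LongQueries ON SERVER STATE = START;
```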

Optimize for Ad Hoc Workloads

This is a small but significant feature introduced in SQL Server 2008. When enabled, SQL Server is more restrictive about adding new plans to the plan cache, which can save a lot of valuable memory in environments where mostly ad hoc SQL queries are used.
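It is a server-level setting, turned on like this:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
-- From now on, an ad hoc plan is only fully cached the second time
-- the same query is seen; the first execution caches just a small stub.
```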

SQL Server 2008 R2

Not many new features were added in SQL Server 2008 R2. It focused more on performance and on improving existing features, such as better compression of Unicode strings. It was also here that Microsoft started to talk about Self-Service BI and developed the in-memory analytics engine xVelocity.

Multi-server management in Management Studio

A new feature in Management Studio is that you can run a query against many servers at the same time. This is done by creating a server group in the Registered Servers window and running the query against the group.

Hyper-V Live Migration support

In Windows Server 2008 R2, a new feature was Hyper-V Live Migration, which allows virtual machines to be moved from one server to another without stopping them. SQL Server 2008 R2 added support for this feature, so you can move your virtual SQL Servers without rebooting.

SQL Server 2012

AlwaysOn

A new High Availability solution called AlwaysOn was launched in SQL Server 2012. It combines the best of Failover Clustering and Database Mirroring. You are no longer dependent on a SAN, and you can manage databases in Availability Groups that fail over together. The number of false failovers and the downtime during failover have also been heavily reduced compared to earlier technologies. Servers can now be in different geographical locations (as opposed to failover clustering in earlier versions of SQL Server). Up to 4 secondaries (of which 2 can be synchronous) can be available, and they can be used for running queries. Even backups can be run on the secondaries.

ColumnStore Indexes

ColumnStore indexes compress the data column by column and can therefore achieve very good compression ratios, especially for Data Warehouse fact tables. Another advantage is that queries involving only a few columns will run much faster, since SQL Server only needs to retrieve data for those columns and not whole rows. The end result is that typical Data Warehouse workloads – or other workloads where you write seldom but read often – can get greatly improved performance.
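In SQL Server 2012 the columnstore is nonclustered and makes the table read-only while it exists (the table and index names here are made up):

```sql
-- Columnstore over the columns typically scanned by DW queries
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
    ON dbo.FactSales (DateKey, ProductKey, Amount);
```

To load new data you drop or disable the index, load, and rebuild it – a restriction lifted by the clustered, updateable columnstore in SQL Server 2014.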

SQL Server Data Tools

The purpose of SSDT (SQL Server Data Tools) is to give database developers tools as good as those application developers have. SSDT is a plugin for Visual Studio that provides a new SQL Server development environment including refactoring, unit testing, code analysis, source control and schema compare, just to mention some of the benefits. The requirement for these features is that you have Visual Studio 2012 Professional or better.

Data-Tier Applications (DACPACs and BACPACs)

Deploying and upgrading SQL Server application databases has always been tricky and has involved a lot of manual work. Now databases can be packaged into DACPACs (Data-Tier Application packages), which are much easier to deploy. For instance, you can autogenerate deployment scripts based on your existing database and a DACPAC. If you also want to include data, there’s the BACPAC. It works the same as a DACPAC, except that it also includes your data.

Contained Databases

One earlier problem with SQL Server databases has been that they depend on server-level (or rather instance-level) objects. For instance, before SQL Server 2012 you couldn’t create user accounts with passwords in a database: you had to create a login with a password on the server and then tie the database user to that login. Now, with more objects contained in the databases, it is easier to move them from one server to another.
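With partial containment enabled, the user-with-password scenario looks like this (database and user names made up):

```sql
-- Instance must first allow contained database authentication
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
ALTER DATABASE SalesDB SET CONTAINMENT = PARTIAL;
GO
USE SalesDB;
-- A database user with its own password: no server-level login needed,
-- so the database can move between servers without orphaning users
CREATE USER AppUser WITH PASSWORD = N'<strong password here>';
```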

User-defined server roles

In SQL Server 2012, new server roles with the desired permission sets can be created and users assigned to them. That makes managing the rights of database administrators and other staff easier.
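For example, a limited role for junior DBAs (role, permission set and member names made up):

```sql
CREATE SERVER ROLE JuniorDBA;

-- Grant exactly the rights this group needs, no more
GRANT VIEW SERVER STATE    TO JuniorDBA;
GRANT ALTER ANY DATABASE   TO JuniorDBA;

ALTER SERVER ROLE JuniorDBA ADD MEMBER [MYDOMAIN\SomeUser];
```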

Summary

There have been a lot of improvements to SQL Server between 2000 and 2012. I have highlighted here the ones I think are most important from an Enterprise Readiness perspective, but there are many other very important improvements, for Business Intelligence users and Database Developers too, that I’m not covering in this blog post.
