SQL Saturday in Chicago (2016)

This past Saturday I attended SQL Saturday in Chicago (http://www.sqlsaturday.com/484/). It was a nice event, with plenty of opportunities to network, see product demonstrations, and get some FREE SQL Server training.

Sessions I attended:

    • What’s New in SQL Server 2016 (Ross LoForte) – This was a fast-moving presentation that gave a nice overview of what is new in SQL Server 2016. It included visualizations and demonstrations of Query Store (which acts like a flight recorder), live query statistics, temporal tables (retaining previous versions of data), JSON features, Always Encrypted and row-level security, dynamic data masking of sensitive data, enhancements to AlwaysOn availability groups (load balancing, Standard Edition support to replace mirroring), PolyBase, advanced analytics (R scripts), mobile apps (Datazen), new features in SSRS (parameter placement), stretch databases, and backups to Azure. Free ebook: https://blogs.msdn.microsoft.com/microsoft_press/2015/12/22/free-ebook-introducing-microsoft-sql-server-2016-mission-critical-applications-deeper-insights-hyperscale-cloud-preview-edition/
    • Scripting out SQL Server for Documentation and Disaster Recovery (George Walkey) – This presentation focused on the presenter’s PowerShell scripts, which he uses to document his SQL Servers. They could also be useful for disaster recovery, server migrations, upgrades, and auditing. He walked through some of the scripts, but you would need to download and examine them to get a better understanding of what is happening (https://github.com/gwalkey/SQLTranscriptase).
    • Effective Data Visualization: The Ideas of Edward Tufte (David Giard) – The speaker in this session set out to simply explain some data visualization ideas from Edward Tufte (https://en.wikipedia.org/wiki/Edward_Tufte), and the title of the session really fit the presentation. Some of the key takeaways: for larger data sets, visualizing data can be much better than looking at raw data; some visualizations lie or hide data (the lie factor); and Tufte’s concepts of the data-ink ratio and data density.
    • Big Data in the Modern Data Warehouse (Fred Bliss – Aptitive) – This session was presented over lunch and gave a nice introductory explanation of Big Data. One of the keys I noted from the speaker: once you determine the metrics you need, use Big Data as a data source for your data warehouse (http://www.aptitive.com/).
    • Reporting Services 2016 Solutions: The New Awesome (Paul Turley) – The speaker did a nice job presenting the hybrid approach that Microsoft is taking, focusing on both on-premises and cloud usage. He showed some new features of SSRS, including the new portal, parameter placement, KPIs, and mobile reports, as well as Microsoft’s BI roadmap (https://sqlserverbiblog.wordpress.com/).


Overall, I had a great experience and look forward to future events like this.


SQL Saturday in Chicago – 2016

There is a popular saying, included in many songs, that “the best things in life are free”.  And really, that is one of the things I like most about the SQL Server community: there are always opportunities to take advantage of free training.  That being said, here are some upcoming events that I am personally looking forward to:

  1. SQL Saturday in Chicago – March 5, 2016 (http://www.sqlsaturday.com/484/eventhome.aspx).  There seems to be a nice variety of sessions and many vendors will be attending.  I mention the vendors because sometimes a small purchase of a product can save you hours of time.
  2. SQL PASS BA Webinar – Thursday, February 25, 2016 (http://passbaconference.com/2016/Learn/WebinarMarathon.aspx).  Only a few sessions will be presented, but they should provide a taste of the PASS Business Analytics Conference (http://passbaconference.com/2016/Home.aspx).

Look for Server “Ouches” – By Collecting SQL Server Logs (Part 2)

In my last post, I detailed how you could look for server “ouches” by collecting the server’s event logs using PowerShell (https://sqlbadboy.wordpress.com/2015/12/07/look-for-server-ouches-by-collecting-server-logs/). However, at times you may want more detailed information about SQL Server itself, which needs to be obtained from the SQL Server logs.

And although I normally show code in PowerShell, this time I will simply show the T-SQL code. I am assuming it will be easy for someone to embed this T-SQL code in a larger PowerShell script or even an SSIS package for an automated collection.

How to read a SQL Server’s logs?

The key to reading the SQL Server logs will be using the undocumented stored procedure, xp_readerrorlog. This extended stored procedure has the following parameters:

  • Parameter 1 – Used to get a specified error log file. SQL Server keeps a number of historical error log files, so you can use this parameter if you want to read an older error log file. The default is 0, which is the current error log file.
  • Parameter 2 – Used to specify if you want to read from the SQL Server error log (default, value = 1) or SQL Agent log (value = 2).
  • Parameter 3 – Used to specify a search string to filter the results. It must be specified as a Unicode string, with syntax like N'search string'.
  • Parameter 4 – Used to specify an additional search parameter.
  • Parameter 5 – Used to filter out rows older than this datetime value.
  • Parameter 6 – Used to filter out rows newer than this datetime value.
  • Parameter 7 – Used to specify a sort order for the results. Valid values are N'ASC' or N'DESC'.
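
For example, the two search strings can be combined to narrow the results (a returned row must match both strings). Something like this would pull failed login messages from the current error log:

-- Search the current error log (file 0, SQL Server log type 1)
-- for entries containing both 'Login' and 'failed'
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login', N'failed'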

Another helpful undocumented stored procedure is xp_enumerrorlogs. This extended stored procedure lists all the error log files with their last change dates. The default number of archived logs is 6, but SQL Server can be configured to keep up to 99.
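
Running it with no parameters lists the available error log files, so you can see how far back they go:

-- List the SQL Server error log files with their last change dates and sizes
EXEC master.dbo.xp_enumerrorlogs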

Here is a sample of code that will look for the last 3 days’ worth of entries in the error log.


DECLARE @StartDatetime DATETIME

SELECT @StartDatetime = GETDATE() - 3

EXEC master.dbo.xp_readerrorlog 0, 1, NULL, NULL, @StartDatetime

***NOTE: If you recycle the error logs daily, you will have to read multiple log files to obtain the entries for the last 3 days, as in our example. The error log can be recycled using sp_cycle_errorlog (https://msdn.microsoft.com/en-us/library/ms182512.aspx). It is recommended to recycle the error logs on a periodic basis or based on size so that they are easier to work with.
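
If that is your situation, here is a minimal sketch of a loop that reads every log file into one table and filters from there (the temp table matches the three columns xp_readerrorlog returns; the variable and table names are my own):

DECLARE @StartDatetime DATETIME = GETDATE() - 3
DECLARE @log INT = 0

-- xp_readerrorlog returns three columns: LogDate, ProcessInfo, Text
CREATE TABLE #ErrorLog
(
    LogDate     DATETIME,
    ProcessInfo NVARCHAR(50),
    LogText     NVARCHAR(MAX)
)

WHILE @log <= 6 -- assumes the default of 6 archived error logs
BEGIN
    INSERT INTO #ErrorLog (LogDate, ProcessInfo, LogText)
    EXEC master.dbo.xp_readerrorlog @log, 1, NULL, NULL, @StartDatetime

    SET @log = @log + 1
END

SELECT * FROM #ErrorLog ORDER BY LogDate DESC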

So, here is a simple way to read the SQL Server error logs and look for “ouches”.


Look for Server “Ouches” – By Collecting Server Logs

At some point, most parents have asked their child to “show me where it hurts”. Perhaps a parent asks this question because their child has just yelled “ouch”, or maybe the child just does not seem to be himself (e.g. lethargic). And the reason parents do this is that they want to help their child get better and catch any problem as early as possible.

In a similar way, we need to know when our servers are yelling “ouch” or don’t seem to be themselves. We can do this by examining the event logs, which are basically the server talking to us, and by baselining the performance of our servers.

Now, the easiest way to do both of these things is with some sort of monitoring software. Typically, monitoring software will examine server logs, baseline servers, and alert us to any issues. This is the ideal situation …

However, what if we don’t have any monitoring software, or we don’t have it on all our servers? Well, then we may have to build our own process. And in this post, I am going to examine how to look for “ouches” in our server’s event logs.

How to read a server’s event logs?

First, I will acknowledge that you could go to each server and examine the event logs manually, but that seems like a horrible idea to me. Although I have heard of some DBAs doing that, the problem is that this method is not scalable. If you have one or two servers, OK, it could work. But if you have hundreds of servers, it will never work.

So, that is where automation comes in, and PowerShell is my tool of choice. I am going to demonstrate some ways to read a server’s event logs.

Method 1: Use Get-EventLog (https://technet.microsoft.com/en-us/library/hh849834.aspx)

$server = "SERVER01"; $logdate = (Get-Date).AddDays(-1)  # example values; substitute your own server name and cutoff date
Get-EventLog -ComputerName $server -LogName "Application" -EntryType Error,Warning -After $logdate

Method 2: Use Get-WinEvent (https://technet.microsoft.com/en-us/library/hh849682.aspx)

Get-WinEvent -FilterHashtable @{LogName='Application'; Level=1,2,3; StartTime=$logdate} -ComputerName $server

As you can see, it is pretty simple to read a server’s event logs. Now, all you have to do is read the data into a table and report on it daily (a simple table sketch follows the list below). This will also help you to keep a history of the “ouches”, in case you need it. But there are some things to keep in mind:

  • For the best performance, filter the number of records returned by using the EntryType parameter with Get-EventLog and/or the Level parameter with Get-WinEvent. This performs much better than piping the command to an output, which reads all the records, and then filtering with a where clause. In my deployment, since I am looking for “ouches”, I only care about messages that are critical, error, and warning.
  • Get-EventLog is the older command and can’t read some of the newer event logs from Windows Vista and later. So, it is recommended to use Get-WinEvent for newer operating systems.
  • Get-WinEvent also lists event logs and event log providers. You can get events from selected logs or from logs generated by selected event providers. And, you can combine events from multiple sources in a single command. Get-WinEvent allows you to filter events by using XPath queries, structured XML queries, and simplified hash-table queries.
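
As for the history table mentioned above, something simple is enough; here is a minimal sketch (the table and column names are my own invention):

CREATE TABLE dbo.ServerEventLogHistory
(
    ServerName    NVARCHAR(128) NOT NULL,
    LogName       NVARCHAR(64)  NOT NULL,
    EntryType     NVARCHAR(16)  NOT NULL, -- Critical / Error / Warning
    TimeGenerated DATETIME      NOT NULL,
    Source        NVARCHAR(256) NULL,
    EventID       INT           NULL,
    Message       NVARCHAR(MAX) NULL,
    CollectedAt   DATETIME      NOT NULL DEFAULT (GETDATE())
)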


So, although there are multiple ways to look for “ouches” on a server, I always recommend a monitoring software solution or an automated process that can easily be built with PowerShell.


SQL PASS 2015 – Watch Portions Online

I just wanted to give a reminder that portions of last week’s PASS Summit 2015 can be watched online (http://www.sqlpass.org/summit/2015/Live.aspx).  Personally, I found the Day 2 keynote on the Internet of Things (IoT, https://en.wikipedia.org/wiki/Internet_of_Things) very interesting.  It is one of those buzzwords we are hearing everywhere now, and it is good for us as data professionals to know how that term fits in or will affect us.


Monitoring AlwaysOn

Just recently, I violated one of my guiding work principles as a DBA … Be Proactive, Not Reactive … What this principle means to me is that I want to anticipate problems and fix them, rather than not noticing them until they bloom into a full-fledged issue that has a greater impact on my users.

Here was the situation … It was just after a maintenance window, and there were some infrastructure issues. I probably could stop the story there, but I will continue … On one of my servers, I am using AlwaysOn (https://msdn.microsoft.com/en-us/library/ff877884.aspx), and my disk space monitoring process noticed rapid growth of one database’s log file on the primary replica, to the point of almost filling the drive. As I checked the log backups, everything was working fine. So I queried the log_reuse_wait_desc column of sys.databases (https://msdn.microsoft.com/en-us/library/ms178534.aspx), which gives a description of why the log file can’t be reused. And as you might have guessed, it pointed to AlwaysOn (AVAILABILITY_REPLICA). I immediately realized that the problem was that the data was not moving to the secondary replica, and I resumed the data movement once the underlying issue was fixed. For those wondering, the issue was a networking problem on the secondary replica, which was our DR server: although the Hyper-V host was up and the virtual server appeared up, it was not allowing any traffic in or out.
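
For reference, the check itself is a one-liner (the database name below is just a placeholder):

-- Why can't the transaction log be reused?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase'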

At this point, I realized that I had no monitoring in place to determine whether or not data is moving between the replicas or if a replica is falling behind. So, I needed to set up something, and I went with Policy Based Management (https://msdn.microsoft.com/en-us/library/bb510667.aspx). The basic idea is that you define a condition that specifies the state of a management facet. A management facet is defined as “a set of logical properties that are related to an area of management interest”. Then, you define a policy that references that condition and specify how and when to evaluate it.


How to use Policy Based Management to monitor AlwaysOn?

There is a great post, which I feel is a must read, about monitoring performance for AlwaysOn at: https://msdn.microsoft.com/en-us/library/dn135338.aspx.

In that post, it explains how data is synchronized, how to estimate recovery time, how to estimate how much data could be lost, and how to set up monitoring. The monitoring consists of setting up 3 conditions and 2 policies. The policies evaluate the current situation to see if the failover time would exceed 10 minutes or if the potential for data loss is greater than 1 hour.
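
If you would like a quick manual check alongside the policies, a sketch like this against the AlwaysOn DMVs surfaces the same underlying numbers (this is my own query, not the one from the referenced post):

-- How far behind is each replica?
SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.log_send_queue_size, -- KB of log not yet sent (potential data loss)
       drs.redo_queue_size,     -- KB of log not yet redone (affects failover time)
       drs.redo_rate            -- KB/sec; redo_queue_size / redo_rate estimates redo time
FROM sys.dm_hadr_database_replica_states AS drs
       INNER JOIN sys.availability_replicas AS ar
              ON drs.replica_id = ar.replica_id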

So, there you have it: a simple way to monitor your AlwaysOn instances. Of course, some of the SQL Server monitoring solutions have built-in functionality to do something similar. But if you aren’t using one of them on that server, this is an easy way to implement something.

In my case, if I had set up this Policy Based Management, I would have been alerted and the log file would not have grown nearly so large. And I would have been proactive, instead of reactive.


Investigating Job Schedules

One of the very first blog posts I made was on knowing what happened last night (https://sqlbadboy.wordpress.com/2012/10/12/what-happened-last-night/). In that post, I mentioned various ways that we could find out the statuses of jobs that had run. And this is an essential task for a DBA.

However, we should also know what is supposed to happen and when. Although that may be easier said than done, it is not impossible. Generally, this type of knowledge will come with having experience within an organization. But, there are some things we can do to help speed up that process.

Why is it important to know when things are scheduled to run on your SQL Server?

  • If you know when things are scheduled to run, it can help you identify busy times for your SQL Servers. This is valuable information so that you can schedule your database maintenance at less busy times.
  • If you know when things are scheduled to run, it can help you to monitor business processes. For example, if you have a process that is supposed to run every morning and send a report by a certain time, and the report is not sent, then you can assume there was a problem.
  • If you know when things are scheduled to run, it can help you with baselining servers and planning for hardware resources.

How to find when things are scheduled to run on your SQL Server?

  • Use the Microsoft system stored procedures, such as msdb.dbo.sp_help_job and msdb.dbo.sp_help_jobschedule (see the example after the query below).
  • Use T-SQL against the system tables (sysjobs, sysjobschedules, sysschedules) in the msdb database:
SELECT
       sql_server_name = CAST(SERVERPROPERTY('ServerName') AS NVARCHAR(128)),
       job_name = j.name,
       schedule_name = s.name,
       is_scheduled_enabled = s.enabled,
       schedule_description =
              CASE s.freq_type
              WHEN 1 THEN 'Once on ' + CONVERT(VARCHAR, s.active_start_date) + ' at ' + CONVERT(VARCHAR, s.active_start_time)  
              WHEN 4 THEN 'Every day ' 
              WHEN 8 THEN 'Every ' + CONVERT(NVARCHAR, s.freq_recurrence_factor) + ' week(s) on ' + 
                     CASE WHEN s.freq_interval & 1 = 1 THEN 'Sunday, ' ELSE '' END +
                     CASE WHEN s.freq_interval & 2 = 2 THEN 'Monday, ' ELSE '' END +
                     CASE WHEN s.freq_interval & 4 = 4 THEN 'Tuesday, ' ELSE '' END +
                     CASE WHEN s.freq_interval & 8 = 8 THEN 'Wednesday, ' ELSE '' END +
                     CASE WHEN s.freq_interval & 16 = 16 THEN 'Thursday, ' ELSE '' END +
                     CASE WHEN s.freq_interval & 32 = 32 THEN 'Friday, ' ELSE '' END +
                     CASE WHEN s.freq_interval & 64 = 64 THEN 'Saturday, ' ELSE '' END 
              WHEN 16 THEN 'Every ' + CONVERT(VARCHAR, s.freq_recurrence_factor) + ' month(s) on day ' + CONVERT(VARCHAR, s.freq_interval) + ' of that month '  
              WHEN 32 THEN 'Every ' + CONVERT(VARCHAR, s.freq_recurrence_factor) + ' month(s) on the ' + 
                     CASE s.freq_relative_interval  
                     WHEN 1 THEN 'first '  
                     WHEN 2 THEN 'second '  
                     WHEN 4 THEN 'third '  
                     WHEN 8 THEN 'fourth '  
                     WHEN 16 THEN 'last '  
                     END +  
                     CASE  
                     WHEN (s.freq_interval > 00) AND (s.freq_interval < 08) 
                           THEN DATENAME(dw, '1996120' + CONVERT(VARCHAR, s.freq_interval))  
                     WHEN (s.freq_interval = 08) THEN 'day'  
                     WHEN (s.freq_interval = 09) THEN 'week day'  
                     WHEN (s.freq_interval = 10) THEN 'weekend day'  
                     END + ' of that month '  
              WHEN 64 THEN 'Automatically starts when SQLServerAgent starts.'
              WHEN 128 THEN 'Runs when the computer is idle.'
              END +
              CASE s.freq_subday_type  
              WHEN 1 THEN 'at ' + CONVERT(VARCHAR, s.active_start_time)  
              WHEN 2 THEN 'every ' + CONVERT(VARCHAR, s.freq_subday_interval) + ' second(s)'  
              WHEN 4 THEN 'every ' + CONVERT(VARCHAR, s.freq_subday_interval) + ' minute(s)'  
              WHEN 8 THEN 'every ' + CONVERT(VARCHAR, s.freq_subday_interval) + ' hour(s)'  
              ELSE ''
              END + 
              CASE  
              WHEN s.freq_subday_type IN (2, 4, 8)  
                     THEN ' between ' + CONVERT(VARCHAR, s.active_start_time) + ' and ' + CONVERT(VARCHAR, s.active_end_time)  
              ELSE ''
              END
FROM msdb.dbo.sysjobs j
       INNER JOIN msdb.dbo.sysjobschedules js
              ON j.job_id = js.job_id
       INNER JOIN msdb.dbo.sysschedules s
              ON js.schedule_id = s.schedule_id
ORDER BY j.name
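
And as for the first option above, the system stored procedures can be run directly; for example (the job name is just a placeholder):

-- List all jobs, with their current status and next run date/time
EXEC msdb.dbo.sp_help_job

-- Show the schedule(s) attached to one specific job
EXEC msdb.dbo.sp_help_jobschedule @job_name = N'MyNightlyJob'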

So, hopefully this post will make it just a little bit easier to find out what is going to happen when it comes to our SQL Servers.
